Chapter 133. Hazelcast Instance Component
Available as of Camel version 2.7

The Hazelcast instance component is one of the Camel Hazelcast components and allows you to consume join/leave events of the cache instance in the cluster. Hazelcast makes sense on a single "server node", but it is extremely powerful in a clustered environment. This endpoint provides no producer.

133.1. Options

The Hazelcast Instance component supports 3 options, which are listed below.

Name | Description | Default | Type
hazelcastInstance (advanced) | The Hazelcast instance reference which can be used for the Hazelcast endpoint. If you do not specify the instance reference, Camel uses the default Hazelcast instance from the camel-hazelcast component. | | HazelcastInstance
hazelcastMode (advanced) | The Hazelcast mode reference, which defines the kind of instance that should be used. If you do not specify the mode, the node mode is the default. | node | String
resolvePropertyPlaceholders (advanced) | Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. | true | boolean

The Hazelcast Instance endpoint is configured using URI syntax:

hazelcast-instance:cacheName

with the following path and query parameters:

133.1.1. Path Parameters (1 parameter):

Name | Description | Default | Type
cacheName | Required. The name of the cache. | | String

133.1.2. Query Parameters (16 parameters):

Name | Description | Default | Type
reliable (common) | Define if the endpoint will use a reliable Topic struct or not. | false | boolean
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these are logged at WARN or ERROR level and ignored. | false | boolean
defaultOperation (consumer) | To specify a default operation to use if no operation header has been provided. | | HazelcastOperation
hazelcastInstance (consumer) | The Hazelcast instance reference which can be used for the Hazelcast endpoint. | | HazelcastInstance
hazelcastInstanceName (consumer) | The Hazelcast instance reference name which can be used for the Hazelcast endpoint. If you do not specify the instance reference, Camel uses the default Hazelcast instance from the camel-hazelcast component. | | String
pollingTimeout (consumer) | Define the polling timeout of the Queue consumer in Poll mode. | 10000 | long
poolSize (consumer) | Define the pool size for the Queue Consumer Executor. | 1 | int
queueConsumerMode (consumer) | Define the Queue Consumer mode: Listen or Poll. | Listen | HazelcastQueueConsumerMode
exceptionHandler (consumer) | To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, this option is not in use. By default the consumer deals with exceptions; these are logged at WARN or ERROR level and ignored. | | ExceptionHandler
exchangePattern (consumer) | Sets the exchange pattern when the consumer creates an exchange. | | ExchangePattern
synchronous (advanced) | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). | false | boolean
concurrentConsumers (seda) | To use concurrent consumers polling from the SEDA queue. | 1 | int
onErrorDelay (seda) | Milliseconds before the consumer continues polling after an error has occurred. | 1000 | int
pollTimeout (seda) | The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. | 1000 | int
transacted (seda) | If set to true, the consumer runs in transaction mode, where the messages in the SEDA queue are only removed if the transaction commits, which happens when the processing is complete. | false | boolean
transferExchange (seda) | If set to true, the whole Exchange is transferred. If the header or body contains non-serializable objects, they are skipped. | false | boolean

133.2. Spring Boot Auto-Configuration

The component supports the Spring Boot options listed below.

Name | Description | Default | Type
camel.component.hazelcast-atomicvalue.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-atomicvalue.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-instance.enabled | Enable the hazelcast-instance component. | true | Boolean
camel.component.hazelcast-instance.hazelcast-instance | The Hazelcast instance reference which can be used for the Hazelcast endpoint. If you do not specify the instance reference, Camel uses the default Hazelcast instance from the camel-hazelcast component. The option is a com.hazelcast.core.HazelcastInstance type. | | String
camel.component.hazelcast-instance.hazelcast-mode | The Hazelcast mode reference, which defines the kind of instance that should be used. If you do not specify the mode, the node mode is the default. | node | String
camel.component.hazelcast-instance.resolve-property-placeholders | Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. | true | Boolean
camel.component.hazelcast-list.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-list.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-map.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-map.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-multimap.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-multimap.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-queue.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-queue.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-replicatedmap.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-replicatedmap.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-ringbuffer.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-ringbuffer.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-seda.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-seda.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-set.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-set.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean
camel.component.hazelcast-topic.customizer.hazelcast-instance.enabled | Enable or disable the cache-manager customizer. | true | Boolean
camel.component.hazelcast-topic.customizer.hazelcast-instance.override | Configure if the cache manager eventually set on the component should be overridden by the customizer. | false | Boolean

133.3. instance consumer - from("hazelcast-instance:foo")

The instance consumer fires if a new cache instance joins or leaves the cluster. Here is a sample:

fromF("hazelcast-%sfoo", HazelcastConstants.INSTANCE_PREFIX)
    .log("instance...")
    .choice()
        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED))
            .log("...added")
            .to("mock:added")
        .otherwise()
            .log("...removed")
            .to("mock:removed");

Each event provides the following information inside the message header.

Header variables inside the response message:

Name | Type | Description
CamelHazelcastListenerTime | Long | Time of the event in milliseconds.
CamelHazelcastListenerType | String | The consumer sets this to "instancelistener".
CamelHazelcastListenerAction | String | Type of event, here added or removed.
CamelHazelcastInstanceHost | String | Host name of the instance.
CamelHazelcastInstancePort | Integer | Port number of the instance.
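As an illustration only (the instance reference name myHz is an assumption, not a value taken from this chapter), a consumer endpoint URI that combines the cacheName path parameter with query parameters from the tables above might look like this:

hazelcast-instance:foo?hazelcastInstanceName=myHz&reliable=false

Here, hazelcastInstanceName points the endpoint at a named Hazelcast instance instead of the default instance created by camel-hazelcast.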
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-instance-component
Chapter 12. Managing AMQ Streams
This chapter covers tasks to maintain a deployment of AMQ Streams. 12.1. Working with custom resources You can use oc commands to retrieve information and perform other operations on AMQ Streams custom resources. Using oc with the status subresource of a custom resource allows you to get information about the resource. 12.1.1. Performing oc operations on custom resources Use oc commands, such as get, describe, edit, or delete, to perform operations on resource types. For example, oc get kafkatopics retrieves a list of all Kafka topics and oc get kafkas retrieves all deployed Kafka clusters. When referencing resource types, you can use both singular and plural names: oc get kafkas gets the same results as oc get kafka. You can also use the short name of the resource. Learning short names can save you time when managing AMQ Streams. The short name for Kafka is k, so you can also run oc get k to list all Kafka clusters. oc get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3 Table 12.1. Long and short names for each AMQ Streams resource AMQ Streams resource Long name Short name Kafka kafka k Kafka Topic kafkatopic kt Kafka User kafkauser ku Kafka Connect kafkaconnect kc Kafka Connect S2I kafkaconnects2i kcs2i Kafka Connector kafkaconnector kctr Kafka Mirror Maker kafkamirrormaker kmm Kafka Mirror Maker 2 kafkamirrormaker2 kmm2 Kafka Bridge kafkabridge kb Kafka Rebalance kafkarebalance kr 12.1.1.1. Resource categories Categories of custom resources can also be used in oc commands. All AMQ Streams custom resources belong to the category strimzi, so you can use strimzi to get all the AMQ Streams resources with one command. For example, running oc get strimzi lists all AMQ Streams custom resources in a given namespace. oc get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple The oc get strimzi -o name command returns all resource types and resource names. The -o name option fetches the output in the type/name format. oc get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user You can combine this strimzi command with other commands. For example, you can pass it into an oc delete command to delete all resources in a single command. oc delete $(oc get strimzi -o name) kafka.kafka.strimzi.io "my-cluster" deleted kafkatopic.kafka.strimzi.io "kafka-apps" deleted kafkauser.kafka.strimzi.io "my-user" deleted Deleting all resources in a single operation might be useful, for example, when you are testing new AMQ Streams features. 12.1.1.2. Querying the status of sub-resources There are other values you can pass to the -o option. For example, by using -o yaml you get the output in YAML format. Using -o json returns it as JSON. You can see all the options in oc get --help. One of the most useful options is the JSONPath support, which allows you to pass JSONPath expressions to query the Kubernetes API. A JSONPath expression can extract or navigate specific parts of any resource. For example, you can use the JSONPath expression {.status.listeners[?(@.type=="tls")].bootstrapServers} to get the bootstrap address from the status of the Kafka custom resource and use it in your Kafka clients. Here, the command finds the bootstrapServers value of the tls listeners.
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type=="tls")].bootstrapServers}{"\n"}' my-cluster-kafka-bootstrap.myproject.svc:9093 By changing the type condition to @.type=="external" or @.type=="plain" you can also get the address of the other Kafka listeners. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' 192.168.1.247:9094 You can use jsonpath to extract any other property or group of properties from any custom resource. 12.1.2. AMQ Streams custom resource status information Several resources have a status property, as described in the following table. Table 12.2. Custom resource status properties AMQ Streams resource Schema reference Publishes status information on... Kafka Section 13.2.55, " KafkaStatus schema reference" The Kafka cluster. KafkaConnect Section 13.2.82, " KafkaConnectStatus schema reference" The Kafka Connect cluster, if deployed. KafkaConnectS2I Section 13.2.86, " KafkaConnectS2IStatus schema reference" The Kafka Connect cluster with Source-to-Image support, if deployed. KafkaConnector Section 13.2.122, " KafkaConnectorStatus schema reference" KafkaConnector resources, if deployed. KafkaMirrorMaker Section 13.2.109, " KafkaMirrorMakerStatus schema reference" The Kafka MirrorMaker tool, if deployed. KafkaTopic Section 13.2.89, " KafkaTopicStatus schema reference" Kafka topics in your Kafka cluster. KafkaUser Section 13.2.102, " KafkaUserStatus schema reference" Kafka users in your Kafka cluster. KafkaBridge Section 13.2.119, " KafkaBridgeStatus schema reference" The AMQ Streams Kafka Bridge, if deployed. The status property of a resource provides information on the resource's: Current state , in the status.conditions property Last observed generation , in the status.observedGeneration property The status property also provides resource-specific information. For example: KafkaStatus provides information on listener addresses, and the id of the Kafka cluster. KafkaConnectStatus provides the REST API endpoint for Kafka Connect connectors. KafkaUserStatus provides the user name of the Kafka user and the Secret in which their credentials are stored. KafkaBridgeStatus provides the HTTP address at which external client applications can access the Bridge service. A resource's current state is useful for tracking progress related to the resource achieving its desired state , as defined by the spec property. The status conditions provide the time and reason the state of the resource changed and details of events preventing or delaying the operator from realizing the resource's desired state. The last observed generation is the generation of the resource that was last reconciled by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation , the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource. AMQ Streams creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit , for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster. Here we see the status property specified for a Kafka custom resource. Kafka custom resource with status apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # ... 
status: conditions: 1 - lastTransitionTime: 2021-07-23T23:46:57+0000 status: "True" type: Ready 2 observedGeneration: 4 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9092 type: plain - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9093 certificates: - | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- type: tls - addresses: - host: 172.29.49.180 port: 9094 certificates: - | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- type: external clusterId: CLUSTER-ID 5 # ... 1 Status conditions describe criteria related to the status that cannot be deduced from the existing resource information, or are specific to the instance of a resource. 2 The Ready condition indicates whether the Cluster Operator currently considers the Kafka cluster able to handle traffic. 3 The observedGeneration indicates the generation of the Kafka custom resource that was last reconciled by the Cluster Operator. 4 The listeners describe the current Kafka bootstrap addresses by type. 5 The Kafka cluster id. Important The address in the custom resource status for external listeners with type nodeport is currently not supported. Note The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a ready state. Accessing status information You can access status information for a resource from the command line. For more information, see Section 12.1.3, "Finding the status of a custom resource" . 12.1.3. Finding the status of a custom resource This procedure describes how to find the status of a custom resource. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Specify the custom resource and use the -o jsonpath option to apply a standard JSONPath expression to select the status property: oc get kafka <kafka_resource_name> -o jsonpath='{.status}' This expression returns all the status information for the specified custom resource. You can use dot notation, such as status.listeners or status.observedGeneration , to fine-tune the status information you wish to see. Additional resources Section 12.1.2, "AMQ Streams custom resource status information" For more information about using JSONPath, see JSONPath support . 12.2. Pausing reconciliation of custom resources Sometimes it is useful to pause the reconciliation of custom resources managed by AMQ Streams Operators, so that you can perform fixes or make updates. If reconciliations are paused, any changes made to custom resources are ignored by the Operators until the pause ends. If you want to pause reconciliation of a custom resource, set the strimzi.io/pause-reconciliation annotation to true in its configuration. This instructs the appropriate Operator to pause reconciliation of the custom resource. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused. You can also create a custom resource with the pause annotation enabled. The custom resource is created, but it is ignored. Prerequisites The AMQ Streams Operator that manages the custom resource is running. 
Procedure Annotate the custom resource in OpenShift, setting pause-reconciliation to true: oc annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation="true" For example, for the KafkaConnect custom resource: oc annotate KafkaConnect my-connect strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused: oc describe KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE The type condition changes to ReconciliationPaused at the lastTransitionTime. Example custom resource with a paused reconciliation condition type apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: "true" strimzi.io/use-connector-resources: "true" creationTimestamp: 2021-03-12T10:47:11Z #... spec: # ... status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: "True" type: ReconciliationPaused Resuming from pause To resume reconciliation, you can set the annotation to false, or remove the annotation. Additional resources Customizing OpenShift resources Finding the status of a custom resource 12.3. Manually starting rolling updates of Kafka and ZooKeeper clusters AMQ Streams supports the use of annotations on StatefulSet and Pod resources to manually trigger a rolling update of Kafka and ZooKeeper clusters through the Cluster Operator. Rolling updates restart the pods of the resource with new ones. Manually performing a rolling update on a specific pod or set of pods from the same StatefulSet is usually only required in exceptional circumstances. However, rather than deleting the pods directly, if you perform the rolling update through the Cluster Operator you ensure that: The manual deletion of the pod does not conflict with simultaneous Cluster Operator operations, such as deleting other pods in parallel. The Cluster Operator logic handles the Kafka configuration specifications, such as the number of in-sync replicas. 12.3.1. Prerequisites To perform a manual rolling update, you need a running Cluster Operator and Kafka cluster. See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster 12.3.2. Performing a rolling update using a StatefulSet annotation This procedure describes how to manually trigger a rolling update of an existing Kafka cluster or ZooKeeper cluster using an OpenShift StatefulSet annotation. Procedure Find the name of the StatefulSet that controls the Kafka or ZooKeeper pods you want to manually update. For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet names are my-cluster-kafka and my-cluster-zookeeper. Annotate the StatefulSet resource in OpenShift. Use oc annotate: oc annotate statefulset <cluster-name>-kafka strimzi.io/manual-rolling-update=true oc annotate statefulset <cluster-name>-zookeeper strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet. 12.3.3. Performing a rolling update using a Pod annotation This procedure describes how to manually trigger a rolling update of an existing Kafka cluster or ZooKeeper cluster using an OpenShift Pod annotation.
When multiple pods from the same StatefulSet are annotated, consecutive rolling updates are performed within the same reconciliation run. Procedure Find the name of the Kafka or ZooKeeper Pod you want to manually update. For example, if your Kafka cluster is named my-cluster, the corresponding Pod names are my-cluster-kafka-<index> and my-cluster-zookeeper-<index>. The index starts at zero and ends at the total number of replicas minus one. Annotate the Pod resource in OpenShift. Use oc annotate: oc annotate pod <cluster-name>-kafka-<index> strimzi.io/manual-rolling-update=true oc annotate pod <cluster-name>-zookeeper-<index> strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of the annotated Pod is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of a pod is complete, the annotation is removed from the Pod. 12.4. Discovering services using labels and annotations Service discovery makes it easier for client applications running in the same OpenShift cluster as AMQ Streams to interact with a Kafka cluster. A service discovery label and annotation are generated for services used to access the Kafka cluster: Internal Kafka bootstrap service HTTP Bridge service The label helps to make the service discoverable, and the annotation provides connection details that a client application can use to make the connection. The service discovery label, strimzi.io/discovery, is set as true for the Service resources. The service discovery annotation has the same key, providing connection details in JSON format for each service. Example internal Kafka bootstrap service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 9092, "tls" : false, "protocol" : "kafka", "auth" : "scram-sha-512" }, { "port" : 9093, "tls" : true, "protocol" : "kafka", "auth" : "tls" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: "true" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #... Example HTTP Bridge service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 8080, "tls" : false, "auth" : "none", "protocol" : "http" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: "true" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service 12.4.1. Returning connection details on services You can find the services by specifying the discovery label when fetching services from the command line or a corresponding API call. oc get service -l strimzi.io/discovery=true The connection details are returned when retrieving the service discovery label. 12.5. Recovering a cluster from persistent volumes You can recover a Kafka cluster from persistent volumes (PVs) if they are still present. You might want to do this, for example, after: A namespace was deleted unintentionally A whole OpenShift cluster is lost, but the PVs remain in the infrastructure 12.5.1. Recovery from namespace deletion Recovery from namespace deletion is possible because of the relationship between persistent volumes and namespaces. A PersistentVolume (PV) is a storage resource that lives outside of a namespace. A PV is mounted into a Kafka pod using a PersistentVolumeClaim (PVC), which lives inside a namespace. The reclaim policy for a PV tells a cluster how to act when a namespace is deleted.
If the reclaim policy is set as: Delete (default), PVs are deleted when PVCs are deleted within a namespace Retain , PVs are not deleted when a namespace is deleted To ensure that you can recover from a PV if a namespace is deleted unintentionally, the policy must be reset from Delete to Retain in the PV specification using the persistentVolumeReclaimPolicy property: apiVersion: v1 kind: PersistentVolume # ... spec: # ... persistentVolumeReclaimPolicy: Retain Alternatively, PVs can inherit the reclaim policy of an associated storage class. Storage classes are used for dynamic volume allocation. By configuring the reclaimPolicy property for the storage class, PVs that use the storage class are created with the appropriate reclaim policy. The storage class is configured for the PV using the storageClassName property. apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # ... # ... reclaimPolicy: Retain apiVersion: v1 kind: PersistentVolume # ... spec: # ... storageClassName: gp2-retain Note If you are using Retain as the reclaim policy, but you want to delete an entire cluster, you need to delete the PVs manually. Otherwise they will not be deleted, and may cause unnecessary expenditure on resources. 12.5.2. Recovery from loss of an OpenShift cluster When a cluster is lost, you can use the data from disks/volumes to recover the cluster if they were preserved within the infrastructure. The recovery procedure is the same as with namespace deletion, assuming PVs can be recovered and they were created manually. 12.5.3. Recovering a deleted cluster from persistent volumes This procedure describes how to recover a deleted cluster from persistent volumes (PVs). In this situation, the Topic Operator identifies that topics exist in Kafka, but the KafkaTopic resources do not exist. When you get to the step to recreate your cluster, you have two options: Use Option 1 when you can recover all KafkaTopic resources. The KafkaTopic resources must therefore be recovered before the cluster is started so that the corresponding topics are not deleted by the Topic Operator. Use Option 2 when you are unable to recover all KafkaTopic resources. In this case, you deploy your cluster without the Topic Operator, delete the Topic Operator topic store metadata, and then redeploy the Kafka cluster with the Topic Operator so it can recreate the KafkaTopic resources from the corresponding topics. Note If the Topic Operator is not deployed, you only need to recover the PersistentVolumeClaim (PVC) resources. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A volumeName is specified for the PVC and this must match the name of the PV. For more information, see: Persistent Volume Claim naming JBOD and Persistent Volume Claims Note The procedure does not include recovery of KafkaUser resources, which must be recreated manually. If passwords and certificates need to be retained, secrets must be recreated before creating the KafkaUser resources. Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example output showing columns important to this procedure: NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... 
myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2 NAME shows the name of each PV. RECLAIM POLICY shows that PVs are retained . CLAIM shows the link to the original PVCs. Recreate the original namespace: oc create namespace myproject Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. For example: apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator. oc create -f install/cluster-operator -n my-project Recreate your cluster. Follow the steps depending on whether or not you have all the KafkaTopic resources needed to recreate your cluster. Option 1 : If you have all the KafkaTopic resources that existed before you lost your cluster, including internal topics such as committed offsets from __consumer_offsets : Recreate all KafkaTopic resources. It is essential that you recreate the resources before deploying the cluster, or the Topic Operator will delete the topics. Deploy the Kafka cluster. For example: oc apply -f kafka.yaml Option 2 : If you do not have all the KafkaTopic resources that existed before you lost your cluster: Deploy the Kafka cluster, as with the first option, but without the Topic Operator by removing the topicOperator property from the Kafka resource before deploying. If you include the Topic Operator in the deployment, the Topic Operator will delete all the topics. 
Delete the internal topic store topics from the Kafka cluster: oc run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. Enable the Topic Operator by redeploying the Kafka cluster with the topicOperator property to recreate the KafkaTopic resources. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} 1 #... 1 Here we show the default configuration, which has no additional properties. You specify the required configuration using the properties described in Section 13.2.45, " EntityTopicOperatorSpec schema reference" . Verify the recovery by listing the KafkaTopic resources: oc get KafkaTopic 12.6. Setting limits on brokers using the Kafka Static Quota plugin Important The Kafka Static Quota plugin is a Technology Preview only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by configuring the Kafka resource. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers. You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps. Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit. Note For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. Prerequisites The Cluster Operator that manages the Kafka cluster is running. Procedure Add the plugin properties to the config of the Kafka resource. The plugin properties are shown in this example configuration. Example Kafka Static Quota plugin configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... 
config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6 1 Loads the Kafka Static Quota plugin. 2 Sets the producer byte-rate threshold. 1 MBps in this example. 3 Sets the consumer byte-rate threshold. 1 MBps in this example. 4 Sets the lower soft limit for storage. 400 GB in this example. 5 Sets the higher hard limit for storage. 500 GB in this example. 6 Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check. Update the resource. oc apply -f KAFKA-CONFIG-FILE Additional resources Kafka broker configuration tuning Setting user quotas 12.7. Tuning Kafka configuration Use configuration properties to optimize the performance of Kafka brokers, producers and consumers. A minimum set of configuration properties is required, but you can add or adjust properties to change how producers and consumers interact with Kafka brokers. For example, you can tune latency and throughput of messages so that clients can respond to data in real time. You might start by analyzing metrics to gauge where to make your initial configurations, then make incremental changes and further comparisons of metrics until you have the configuration you need. Additional resources Apache Kafka documentation 12.7.1. Kafka broker configuration tuning Use configuration properties to optimize the performance of Kafka brokers. You can use standard Kafka broker configuration options, except for properties managed directly by AMQ Streams. 12.7.1.1. Basic broker configuration Certain broker configuration options are managed directly by AMQ Streams, driven by the Kafka custom resource specification: broker.id is the ID of the Kafka broker log.dirs are the directories for log data zookeeper.connect is the configuration to connect Kafka with ZooKeeper listener exposes the Kafka cluster to clients authorization mechanisms allow or decline actions executed by users authentication mechanisms prove the identity of users requiring access to Kafka Broker IDs start from 0 (zero) and correspond to the number of broker replicas. Log directories are mounted to /var/lib/kafka/data/kafka-log IDX based on the spec.kafka.storage configuration in the Kafka custom resource. IDX is the Kafka broker pod index. As such, you cannot configure these options through the config property of the Kafka custom resource. For a list of exclusions, see the KafkaClusterSpec schema reference . However, a typical broker configuration will include settings for properties related to topics, threads and logs. Basic broker configuration properties # ... num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000 # ... 12.7.1.2. 
Replicating topics for high availability Basic topic properties set the default number of partitions and replication factor for topics, which will apply to topics that are created without these properties being explicitly set, including when topics are created automatically. # ... num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576 # ... The auto.create.topics.enable property is enabled by default so that topics that do not already exist are created automatically when needed by producers and consumers. If you are using automatic topic creation, you can set the default number of partitions for topics using num.partitions. Generally, however, this property is disabled so that more control is provided over topics through explicit topic creation. For example, you can use the AMQ Streams KafkaTopic resource or applications to create topics. For high availability environments, it is advisable to increase the replication factor to at least 3 for topics and set the minimum number of in-sync replicas required to 1 less than the replication factor. For topics created using the KafkaTopic resource, the replication factor is set using spec.replicas. For data durability, you should also set min.insync.replicas in your topic configuration and message delivery acknowledgments using acks=all in your producer configuration. Use replica.fetch.max.bytes to set the maximum size, in bytes, of messages fetched by each follower that replicates the leader partition. Change this value according to the average message size and throughput. When considering the total memory allocation required for read/write buffering, the memory available must also be able to accommodate the maximum replicated message size when multiplied by all followers. The delete.topic.enable property is enabled by default to allow topics to be deleted. In a production environment, you should disable this property to avoid accidental topic deletion, resulting in data loss. You can, however, temporarily enable it and delete topics and then disable it again. If delete.topic.enable is enabled, you can delete topics using the KafkaTopic resource. # ... auto.create.topics.enable=false delete.topic.enable=true # ... 12.7.1.3. Internal topic settings for transactions and commits If you are using transactions to enable atomic writes to partitions from producers, the state of the transactions is stored in the internal __transaction_state topic. By default, the brokers are configured with a replication factor of 3 and a minimum of 2 in-sync replicas for this topic, which means that a minimum of three brokers are required in your Kafka cluster. # ... transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 # ... Similarly, the internal __consumer_offsets topic, which stores consumer state, has default settings for the number of partitions and replication factor. # ... offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 # ... Do not reduce these settings in production. You can increase the settings in a production environment. As an exception, you might want to reduce the settings in a single-broker test environment. 12.7.1.4. Improving request handling throughput by increasing I/O threads Network threads handle requests to the Kafka cluster, such as produce and fetch requests from client applications. Produce requests are placed in a request queue. Responses are placed in a response queue.
The number of network threads should reflect the replication factor and the levels of activity from client producers and consumers interacting with the Kafka cluster. If you are going to have a lot of requests, you can increase the number of threads, using the amount of time threads are idle to determine when to add more threads. To reduce congestion and regulate the request traffic, you can limit the number of requests allowed in the request queue before the network thread is blocked. I/O threads pick up requests from the request queue to process them. Adding more threads can improve throughput, but the number of CPU cores and disk bandwidth imposes a practical upper limit. At a minimum, the number of I/O threads should equal the number of storage volumes. # ... num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=1 4 # ... 1 The number of network threads for the Kafka cluster. 2 The number of requests allowed in the request queue. 3 The number of I/O threads for a Kafka broker. 4 The number of threads used for log loading at startup and flushing at shutdown. Configuration updates to the thread pools for all brokers might occur dynamically at the cluster level. These updates are restricted to between half the current size and twice the current size. Note Kafka broker metrics can help with working out the number of threads required. For example, metrics for the average time network threads are idle ( kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent ) indicate the percentage of resources used. If there is 0% idle time, all resources are in use, which means that adding more threads might be beneficial. If threads are slow or limited due to the number of disks, you can try increasing the size of the buffers for network requests to improve throughput: # ... replica.socket.receive.buffer.bytes=65536 # ... And also increase the maximum number of bytes Kafka can receive: # ... socket.request.max.bytes=104857600 # ... 12.7.1.5. Increasing bandwidth for high latency connections Kafka batches data to achieve reasonable throughput over high-latency connections from Kafka to clients, such as connections between datacenters. However, if high latency is a problem, you can increase the size of the buffers for sending and receiving messages. # ... socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576 # ... You can estimate the optimal size of your buffers using a bandwidth-delay product calculation, which multiplies the maximum bandwidth of the link (in bytes/s) with the round-trip delay (in seconds) to give an estimate of how large a buffer is required to sustain maximum throughput. 12.7.1.6. Managing logs with data retention policies Kafka uses logs to store message data. Logs are a series of segments associated with various indexes. New messages are written to an active segment, and never subsequently modified. Segments are read when serving fetch requests from consumers. Periodically, the active segment is rolled to become read-only and a new active segment is created to replace it. There is only a single segment active at a time. Older segments are retained until they are eligible for deletion. Configuration at the broker level sets the maximum size in bytes of a log segment and the amount of time in milliseconds before an active segment is rolled: # ... log.segment.bytes=1073741824 log.roll.ms=604800000 # ... You can override these settings at the topic level using segment.bytes and segment.ms . 
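To make the topic-level override concrete, the following is a minimal sketch of a KafkaTopic resource that sets segment.bytes and segment.ms for one topic. The topic name my-topic, the cluster label my-cluster, and the chosen values are illustrative assumptions rather than values taken from this guide.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                      # hypothetical topic name
  labels:
    strimzi.io/cluster: my-cluster    # must match the name of your Kafka resource (assumption)
spec:
  partitions: 3
  replicas: 3
  config:
    segment.bytes: 536870912          # roll the active segment at 512 MiB instead of the broker default
    segment.ms: 86400000              # or after one day, whichever is reached first

Applying the resource (for example with oc apply -f <file>) overrides the broker-level log.segment.bytes and log.roll.ms settings for this topic only.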
Whether you need to lower or raise these values depends on the policy for segment deletion. A larger size means the active segment contains more messages and is rolled less often. Segments also become eligible for deletion less often. You can set time-based or size-based log retention and cleanup policies so that logs are kept manageable. Depending on your requirements, you can use log retention configuration to delete old segments. If log retention policies are used, non-active log segments are removed when retention limits are reached. Deleting old segments bounds the storage space required for the log so you do not exceed disk capacity. For time-based log retention, you set a retention period based on hours, minutes and milliseconds. The retention period is based on the time messages were appended to the segment. The milliseconds configuration has priority over minutes, which has priority over hours. The minutes and milliseconds configuration is null by default, but the three options provide a substantial level of control over the data you wish to retain. Preference should be given to the milliseconds configuration, as it is the only one of the three properties that is dynamically updateable. # ... log.retention.ms=1680000 # ... If log.retention.ms is set to -1, no time limit is applied to log retention, so all logs are retained. Disk usage should always be monitored, but the -1 setting is not generally recommended as it can lead to issues with full disks, which can be hard to rectify. For size-based log retention, you set a maximum log size (of all segments in the log) in bytes: # ... log.retention.bytes=1073741824 # ... In other words, a log will typically have approximately log.retention.bytes/log.segment.bytes segments once it reaches a steady state. When the maximum log size is reached, older segments are removed. A potential issue with using a maximum log size is that it does not take into account the time messages were appended to a segment. You can use time-based and size-based log retention for your cleanup policy to get the balance you need. Whichever threshold is reached first triggers the cleanup. If you wish to add a time delay before a segment file is deleted from the system, you can add the delay using log.segment.delete.delay.ms for all topics at the broker level or file.delete.delay.ms for specific topics in the topic configuration. # ... log.segment.delete.delay.ms=60000 # ... 12.7.1.7. Removing log data with cleanup policies The method of removing older log data is determined by the log cleaner configuration. The log cleaner is enabled for the broker by default: # ... log.cleaner.enable=true # ... You can set the cleanup policy at the topic or broker level. Broker-level configuration is the default for topics that do not have policy set. You can set policy to delete logs, compact logs, or do both: # ... log.cleanup.policy=compact,delete # ... The delete policy corresponds to managing logs with data retention policies. It is suitable when data does not need to be retained forever. The compact policy guarantees to keep the most recent message for each message key. Log compaction is suitable where message values are changeable, and you want to retain the latest update. If cleanup policy is set to delete logs, older segments are deleted based on log retention limits. Otherwise, if the log cleaner is not enabled, and there are no log retention limits, the log will continue to grow. 
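The same kind of topic-level override applies to cleanup and retention. As a sketch (the topic name and the values are assumptions for illustration only), the cleanup policy and retention limits can be set in the config section of a KafkaTopic resource:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-compacted-topic            # hypothetical topic name
  labels:
    strimzi.io/cluster: my-cluster    # assumption: the name of your Kafka resource
spec:
  partitions: 1
  replicas: 3
  config:
    cleanup.policy: compact,delete    # compact by key, then delete segments past the retention limits
    retention.ms: 604800000           # keep at most 7 days of data
    retention.bytes: 1073741824       # and at most 1 GiB per partition

A topic configured this way is compacted by key and additionally drops segments that fall outside the retention limits, independently of the broker-level defaults.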
If cleanup policy is set for log compaction, the head of the log operates as a standard Kafka log, with writes for new messages appended in order. In the tail of a compacted log, where the log cleaner operates, records will be deleted if another record with the same key occurs later in the log. Messages with null values are also deleted. If you're not using keys, you can't use compaction because keys are needed to identify related messages. While Kafka guarantees that the latest messages for each key will be retained, it does not guarantee that the whole compacted log will not contain duplicates. Figure 12.1. Log showing key value writes with offset positions before compaction Using keys to identify messages, Kafka compaction keeps the latest message (with the highest offset) for a specific message key, eventually discarding earlier messages that have the same key. In other words, the message in its latest state is always available and any out-of-date records of that particular message are eventually removed when the log cleaner runs. You can restore a message back to a state. Records retain their original offsets even when surrounding records get deleted. Consequently, the tail can have non-contiguous offsets. When consuming an offset that's no longer available in the tail, the record with the higher offset is found. Figure 12.2. Log after compaction If you choose only a compact policy, your log can still become arbitrarily large. In which case, you can set policy to compact and delete logs. If you choose to compact and delete, first the log data is compacted, removing records with a key in the head of the log. After which, data that falls before the log retention threshold is deleted. Figure 12.3. Log retention point and compaction point You set the frequency the log is checked for cleanup in milliseconds: # ... log.retention.check.interval.ms=300000 # ... Adjust the log retention check interval in relation to the log retention settings. Smaller retention sizes might require more frequent checks. The frequency of cleanup should be often enough to manage the disk space, but not so often it affects performance on a topic. You can also set a time in milliseconds to put the cleaner on standby if there are no logs to clean: # ... log.cleaner.backoff.ms=15000 # ... If you choose to delete older log data, you can set a period in milliseconds to retain the deleted data before it is purged: # ... log.cleaner.delete.retention.ms=86400000 # ... The deleted data retention period gives time to notice the data is gone before it is irretrievably deleted. To delete all messages related to a specific key, a producer can send a tombstone message. A tombstone has a null value and acts as a marker to tell a consumer the value is deleted. After compaction, only the tombstone is retained, which must be for a long enough period for the consumer to know that the message is deleted. When older messages are deleted, having no value, the tombstone key is also deleted from the partition. 12.7.1.8. Managing disk utilization There are many other configuration settings related to log cleanup, but of particular importance is memory allocation. The deduplication property specifies the total memory for cleanup across all log cleaner threads. You can set an upper limit on the percentage of memory used through the buffer load factor. # ... log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9 # ... 
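As a rough worked example of what this buffer size means in practice, using the 24-bytes-per-entry figure explained in the next paragraph:

134217728 bytes / 24 bytes per entry ≈ 5.6 million entries per cleaner pass

This gives an approximate upper bound on how many unique keys the cleaner can deduplicate in a single run with the default setting.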
Each log entry uses exactly 24 bytes, so you can work out how many log entries the buffer can handle in a single run and adjust the setting accordingly. If possible, consider increasing the number of log cleaner threads if you are looking to reduce the log cleaning time: # ... log.cleaner.threads=8 # ... If you are experiencing issues with 100% disk bandwidth usage, you can throttle the log cleaner I/O so that the sum of the read/write operations is less than a specified double value based on the capabilities of the disks performing the operations: # ... log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 # ... 12.7.1.9. Handling large message sizes The default batch size for messages is 1 MB, which is optimal for maximum throughput in most use cases. Kafka can accommodate larger batches at a reduced throughput, assuming adequate disk capacity. Large message sizes are handled in four ways: Producer-side message compression writes compressed messages to the log. Reference-based messaging sends only a reference to data stored in some other system in the message's value. Inline messaging splits messages into chunks that use the same key, which are then combined on output using a stream-processor like Kafka Streams. Broker and producer/consumer client application configuration built to handle larger message sizes. The reference-based messaging and message compression options are recommended and cover most situations. With any of these options, care must be taken to avoid introducing performance issues. Producer-side compression For producer configuration, you specify a compression.type, such as Gzip, which is then applied to batches of data generated by the producer. Using the broker configuration compression.type=producer, the broker retains whatever compression the producer used. Whenever producer and topic compression do not match, the broker has to compress batches again prior to appending them to the log, which impacts broker performance. Compression also adds additional processing overhead on the producer and decompression overhead on the consumer, but includes more data in a batch, so is often beneficial to throughput when message data compresses well. Combine producer-side compression with fine-tuning of the batch size to facilitate optimum throughput. Using metrics helps to gauge the average batch size needed. Reference-based messaging Reference-based messaging is useful for data replication when you do not know how big a message will be. The external data store must be fast, durable, and highly available for this configuration to work. Data is written to the data store and a reference to the data is returned. The producer sends a message containing the reference to Kafka. The consumer gets the reference from the message and uses it to fetch the data from the data store. Figure 12.4. Reference-based messaging flow As the message passing requires more trips, end-to-end latency will increase. Another significant drawback of this approach is there is no automatic clean up of the data in the external system when the Kafka message gets cleaned up. A hybrid approach would be to only send large messages to the data store and process standard-sized messages directly. Inline messaging Inline messaging is complex, but it does not have the overhead of depending on external systems like reference-based messaging. The producing client application has to serialize and then chunk the data if the message is too big.
The producer then uses the Kafka ByteArraySerializer or similar to serialize each chunk again before sending it. The consumer tracks messages and buffers chunks until it has a complete message. The consuming client application receives the chunks, which are assembled before deserialization. Complete messages are delivered to the rest of the consuming application in order according to the offset of the first or last chunk for each set of chunked messages. Successful delivery of the complete message is checked against offset metadata to avoid duplicates during a rebalance. Figure 12.5. Inline messaging flow Inline messaging has a performance overhead on the consumer side because of the buffering required, particularly when handling a series of large messages in parallel. The chunks of large messages can become interleaved, so that it is not always possible to commit when all the chunks of a message have been consumed if the chunks of another large message in the buffer are incomplete. For this reason, the buffering is usually supported by persisting message chunks or by implementing commit logic. Configuration to handle larger messages If larger messages cannot be avoided, and to avoid blocks at any point of the message flow, you can increase message limits. To do this, configure message.max.bytes at the topic level to set the maximum record batch size for individual topics. If you set message.max.bytes at the broker level, larger messages are allowed for all topics. The broker will reject any message that is greater than the limit set with message.max.bytes . The buffer size for the producers ( max.request.size ) and consumers ( message.max.bytes ) must be able to accommodate the larger messages. 12.7.1.10. Controlling the log flush of message data Log flush properties control the periodic writes of cached message data to disk. The scheduler specifies the frequency of checks on the log cache in milliseconds: # ... log.flush.scheduler.interval.ms=2000 # ... You can control the frequency of the flush based on the maximum amount of time that a message is kept in-memory and the maximum number of messages in the log before writing to disk: # ... log.flush.interval.ms=50000 log.flush.interval.messages=100000 # ... The wait between flushes includes the time to make the check and the specified interval before the flush is carried out. Increasing the frequency of flushes can affect throughput. Generally, the recommendation is to not set explicit flush thresholds and let the operating system perform background flush using its default settings. Partition replication provides greater data durability than writes to any single disk as a failed broker can recover from its in-sync replicas. If you are using application flush management, setting lower flush thresholds might be appropriate if you are using faster disks. 12.7.1.11. Partition rebalancing for availability Partitions can be replicated across brokers for fault tolerance. For a given partition, one broker is elected leader and handles all produce requests (writes to the log). Partition followers on other brokers replicate the partition data of the partition leader for data reliability in the event of the leader failing. Followers do not normally serve clients, though rack configuration allows a consumer to consume messages from the closest replica when a Kafka cluster spans multiple datacenters. Followers operate only to replicate messages from the partition leader and allow recovery should the leader fail. Recovery requires an in-sync follower. 
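To see which replicas are currently in sync for a topic, you can query the cluster with the Kafka Admin API. The following Java sketch is an illustration only, not part of the AMQ Streams documentation; the bootstrap address and topic name are placeholders.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import java.util.Collections;
import java.util.Properties;

public class IsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        try (Admin admin = Admin.create(props)) {
            TopicDescription topic = admin.describeTopics(Collections.singletonList("my-topic"))
                    .all().get().get("my-topic");
            // Print the leader and the in-sync replica set (ISR) for each partition
            topic.partitions().forEach(p ->
                    System.out.printf("partition %d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}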
Followers stay in sync by sending fetch requests to the leader, which returns messages to the follower in order. The follower is considered to be in sync if it has caught up with the most recently committed message on the leader. The leader checks this by looking at the last offset requested by the follower. An out-of-sync follower is usually not eligible as a leader should the current leader fail, unless unclean leader election is allowed . You can adjust the lag time before a follower is considered out of sync: # ... replica.lag.time.max.ms=30000 # ... Lag time puts an upper limit on the time to replicate a message to all in-sync replicas and how long a producer has to wait for an acknowledgment. If a follower fails to make a fetch request and catch up with the latest message within the specified lag time, it is removed from in-sync replicas. You can reduce the lag time to detect failed replicas sooner, but by doing so you might increase the number of followers that fall out of sync needlessly. The right lag time value depends on both network latency and broker disk bandwidth. When a leader partition is no longer available, one of the in-sync replicas is chosen as the new leader. The first broker in a partition's list of replicas is known as the preferred leader. By default, Kafka is enabled for automatic partition leader rebalancing based on a periodic check of leader distribution. That is, Kafka checks to see if the preferred leader is the current leader. A rebalance ensures that leaders are evenly distributed across brokers and brokers are not overloaded. You can use Cruise Control for AMQ Streams to figure out replica assignments to brokers that balance load evenly across the cluster. Its calculation takes into account the differing load experienced by leaders and followers. A failed leader affects the balance of a Kafka cluster because the remaining brokers get the extra work of leading additional partitions. For the assignment found by Cruise Control to actually be balanced it is necessary that partitions are lead by the preferred leader. Kafka can automatically ensure that the preferred leader is being used (where possible), changing the current leader if necessary. This ensures that the cluster remains in the balanced state found by Cruise Control. You can control the frequency, in seconds, of the rebalance check and the maximum percentage of imbalance allowed for a broker before a rebalance is triggered. #... auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #... The percentage leader imbalance for a broker is the ratio between the current number of partitions for which the broker is the current leader and the number of partitions for which it is the preferred leader. You can set the percentage to zero to ensure that preferred leaders are always elected, assuming they are in sync. If the checks for rebalances need more control, you can disable automated rebalances. You can then choose when to trigger a rebalance using the kafka-leader-election.sh command line tool. Note The Grafana dashboards provided with AMQ Streams show metrics for under-replicated partitions and partitions that do not have an active leader. 12.7.1.12. Unclean leader election Leader election to an in-sync replica is considered clean because it guarantees no loss of data. And this is what happens by default. But what if there is no in-sync replica to take on leadership? 
Perhaps the ISR (in-sync replica) only contained the leader when the leader's disk died. If a minimum number of in-sync replicas is not set, and there are no followers in sync with the partition leader when its hard drive fails irrevocably, data is already lost. Not only that, but a new leader cannot be elected because there are no in-sync followers. You can configure how Kafka handles leader failure: # ... unclean.leader.election.enable=false # ... Unclean leader election is disabled by default, which means that out-of-sync replicas cannot become leaders. With clean leader election, if no other broker was in the ISR when the old leader was lost, Kafka waits until that leader is back online before messages can be written or read. Unclean leader election means out-of-sync replicas can become leaders, but you risk losing messages. The choice you make depends on whether your requirements favor availability or durability. You can override the default configuration for specific topics at the topic level. If you cannot afford the risk of data loss, then leave the default configuration. 12.7.1.13. Avoiding unnecessary consumer group rebalances For consumers joining a new consumer group, you can add a delay so that unnecessary rebalances to the broker are avoided: # ... group.initial.rebalance.delay.ms=3000 # ... The delay is the amount of time that the coordinator waits for members to join. The longer the delay, the more likely it is that all the members will join in time and avoid a rebalance. But the delay also prevents the group from consuming until the period has ended. Additional resources Setting limits on brokers using the Kafka Static Quota plugin 12.7.2. Kafka producer configuration tuning Use a basic producer configuration with optional properties that are tailored to specific use cases. Adjusting your configuration to maximize throughput might increase latency or vice versa. You will need to experiment and tune your producer configuration to get the balance you need. 12.7.2.1. Basic producer configuration Connection and serializer properties are required for every producer. Generally, it is good practice to add a client id for tracking, and use compression on the producer to reduce batch sizes in requests. In a basic producer configuration: The order of messages in a partition is not guaranteed. The acknowledgment of messages reaching the broker does not guarantee durability. Basic producer configuration properties # ... bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5 # ... 1 (Required) Tells the producer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The producer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it's not necessary to provide a list of all the brokers in the cluster. 2 (Required) Serializer to transform the key of each message to bytes prior to them being sent to a broker. 3 (Required) Serializer to transform the value of each message to bytes prior to them being sent to a broker. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. 
5 (Optional) The codec for compressing messages, which are sent and might be stored in compressed format and then decompressed when reaching a consumer. Compression is useful for improving throughput and reducing the load on storage, but might not be suitable for low latency applications where the cost of compression or decompression could be prohibitive. 12.7.2.2. Data durability You can apply greater data durability, to minimize the likelihood that messages are lost, using message delivery acknowledgments. # ... acks=all 1 # ... 1 Specifying acks=all forces a partition leader to replicate messages to a certain number of followers before acknowledging that the message request was successfully received. Because of the additional checks, acks=all increases the latency between the producer sending a message and receiving acknowledgment. The number of brokers which need to have appended the messages to their logs before the acknowledgment is sent to the producer is determined by the topic's min.insync.replicas configuration. A typical starting point is to have a topic replication factor of 3, with two in-sync replicas on other brokers. In this configuration, the producer can continue unaffected if a single broker is unavailable. If a second broker becomes unavailable, the producer won't receive acknowledgments and won't be able to produce more messages. Topic configuration to support acks=all # ... min.insync.replicas=2 1 # ... 1 Use 2 in-sync replicas. The default is 1 . Note If the system fails, there is a risk of unsent data in the buffer being lost. 12.7.2.3. Ordered delivery Idempotent producers avoid duplicates as messages are delivered exactly once. IDs and sequence numbers are assigned to messages to ensure the order of delivery, even in the event of failure. If you are using acks=all for data consistency, enabling idempotency makes sense for ordered delivery. Ordered delivery with idempotency # ... enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4 # ... 1 Set to true to enable the idempotent producer. 2 With idempotent delivery the number of in-flight requests may be greater than 1 while still providing the message ordering guarantee. The default is 5 in-flight requests. 3 Set acks to all . 4 Set the number of attempts to resend a failed message request. If you are not using acks=all and idempotency because of the performance cost, set the number of in-flight (unacknowledged) requests to 1 to preserve ordering. Otherwise, a situation is possible where Message-A fails only to succeed after Message-B was already written to the broker. Ordered delivery without idempotency # ... enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647 # ... 1 Set to false to disable the idempotent producer. 2 Set the number of in-flight requests to exactly 1 . 12.7.2.4. Reliability guarantees Idempotence is useful for exactly once writes to a single partition. Transactions, when used with idempotence, allow exactly once writes across multiple partitions. Transactions guarantee that messages using the same transactional ID are produced once, and either all are successfully written to the respective logs or none of them are. # ... enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2 # ... 1 Specify a unique transactional ID. 2 Set the maximum allowed time for transactions in milliseconds before a timeout error is returned. 
The default is 900000 or 15 minutes. The choice of transactional.id is important in order that the transactional guarantee is maintained. Each transactional id should be used for a unique set of topic partitions. For example, this can be achieved using an external mapping of topic partition names to transactional ids, or by computing the transactional id from the topic partition names using a function that avoids collisions. 12.7.2.5. Optimizing throughput and latency Usually, the requirement of a system is to satisfy a particular throughput target for a proportion of messages within a given latency. For example, targeting 500,000 messages per second with 95% of messages being acknowledged within 2 seconds. It's likely that the messaging semantics (message ordering and durability) of your producer are defined by the requirements for your application. For instance, it's possible that you don't have the option of using acks=0 or acks=1 without breaking some important property or guarantee provided by your application. Broker restarts have a significant impact on high percentile statistics. For example, over a long period the 99th percentile latency is dominated by behavior around broker restarts. This is worth considering when designing benchmarks or comparing performance numbers from benchmarking with performance numbers seen in production. Depending on your objective, Kafka offers a number of configuration parameters and techniques for tuning producer performance for throughput and latency. Message batching ( linger.ms and batch.size ) Message batching delays sending messages in the hope that more messages destined for the same broker will be sent, allowing them to be batched into a single produce request. Batching is a compromise between higher latency in return for higher throughput. Time-based batching is configured using linger.ms , and size-based batching is configured using batch.size . Compression ( compression.type ) Message compression adds latency in the producer (CPU time spent compressing the messages), but makes requests (and potentially disk writes) smaller, which can increase throughput. Whether compression is worthwhile, and the best compression to use, will depend on the messages being sent. Compression happens on the thread which calls KafkaProducer.send() , so if the latency of this method matters for your application you should consider using more threads. Pipelining ( max.in.flight.requests.per.connection ) Pipelining means sending more requests before the response to a request has been received. In general more pipelining means better throughput, up to a threshold at which other effects, such as worse batching, start to counteract the effect on throughput. Lowering latency When your application calls KafkaProducer.send() the messages are: Processed by any interceptors Serialized Assigned to a partition Compressed Added to a batch of messages in a per-partition queue At which point the send() method returns. 
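As a hedged illustration of this call path, the sketch below sends a record asynchronously with a callback; the topic and connection details are placeholders. The send() call returns once the record has been added to a batch, and the callback fires later when the broker acknowledges the batch.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;

public class AsyncSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
            // send() returns as soon as the record is added to a batch in the per-partition queue;
            // the callback runs later, when the broker acknowledges (or rejects) the batch.
            producer.send(record, (RecordMetadata metadata, Exception exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("acked partition=%d offset=%d%n", metadata.partition(), metadata.offset());
                }
            });
        }
    }
}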
So the time send() is blocked is determined by: The time spent in the interceptors, serializers and partitioner The compression algorithm used The time spent waiting for a buffer to use for compression Batches will remain in the queue until one of the following occurs: The batch is full (according to batch.size ) The delay introduced by linger.ms has passed The sender is about to send message batches for other partitions to the same broker, and it is possible to add this batch too The producer is being flushed or closed Look at the configuration for batching and buffering to mitigate the impact of send() blocking on latency. # ... linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3 # ... 1 The linger property adds a delay in milliseconds so that larger batches of messages are accumulated and sent in a request. The default is 0. 2 If a maximum batch.size in bytes is used, a request is sent when the maximum is reached, or messages have been queued for longer than linger.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. 3 The buffer size must be at least as big as the batch size, and be able to accommodate buffering, compression and in-flight requests. Increasing throughput Improve throughput of your message requests by adjusting the maximum time to wait before a message is delivered and completes a send request. You can also direct messages to a specified partition by writing a custom partitioner to replace the default. # ... delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2 # ... 1 The maximum time in milliseconds to wait for a complete send request. You can set the value to MAX_LONG to delegate an indefinite number of retries to Kafka. The default is 120000 or 2 minutes. 2 Specify the class name of the custom partitioner. 12.7.3. Kafka consumer configuration tuning Use a basic consumer configuration with optional properties that are tailored to specific use cases. When tuning your consumers, your primary concern will be ensuring that they cope efficiently with the amount of data ingested. As with the producer tuning, be prepared to make incremental changes until the consumers operate as expected. 12.7.3.1. Basic consumer configuration Connection and deserializer properties are required for every consumer. Generally, it is good practice to add a client id for tracking. In a consumer configuration, irrespective of any subsequent configuration: The consumer fetches from a given offset and consumes the messages in order, unless the offset is changed to skip or re-read messages. The broker does not know if the consumer processed the responses, even when committing offsets to Kafka, because the offsets might be sent to a different broker in the cluster. Basic consumer configuration properties # ... bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5 # ... 1 (Required) Tells the consumer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The consumer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it is not necessary to provide a list of all the brokers in the cluster.
If you are using a loadbalancer service to expose the Kafka cluster, you only need the address for the service because the availability is handled by the loadbalancer. 2 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message keys. 3 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message values. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. The id can also be used to throttle consumers based on processing time quotas. 5 (Conditional) A group id is required for a consumer to be able to join a consumer group. 12.7.3.2. Scaling data consumption using consumer groups Consumer groups share a typically large data stream generated by one or multiple producers from a given topic. Consumers are grouped using a group.id property, allowing messages to be spread across the members. One of the consumers in the group is elected leader and decides how the partitions are assigned to the consumers in the group. Each partition can only be assigned to a single consumer. If you do not already have as many consumers as partitions, you can scale data consumption by adding more consumer instances with the same group.id . Adding more consumers to a group than there are partitions will not help throughput, but it does mean that there are consumers on standby should one stop functioning. If you can meet throughput goals with fewer consumers, you save on resources. Consumers within the same consumer group send offset commits and heartbeats to the same broker. So the greater the number of consumers in the group, the higher the request load on the broker. # ... group.id=my-group-id 1 # ... 1 Add a consumer to a consumer group using a group id. 12.7.3.3. Message ordering guarantees Kafka brokers receive fetch requests from consumers that ask the broker to send messages from a list of topics, partitions and offset positions. A consumer observes messages in a single partition in the same order that they were committed to the broker, which means that Kafka only provides ordering guarantees for messages in a single partition. Conversely, if a consumer is consuming messages from multiple partitions, the order of messages in different partitions as observed by the consumer does not necessarily reflect the order in which they were sent. If you want a strict ordering of messages from one topic, use one partition per consumer. 12.7.3.4. Optimizing throughput and latency Control the number of messages returned when your client application calls KafkaConsumer.poll() . Use the fetch.max.wait.ms and fetch.min.bytes properties to increase the minimum amount of data fetched by the consumer from the Kafka broker. Time-based batching is configured using fetch.max.wait.ms , and size-based batching is configured using fetch.min.bytes . If CPU utilization in the consumer or broker is high, it might be because there are too many requests from the consumer. You can adjust fetch.max.wait.ms and fetch.min.bytes properties higher so that there are fewer requests and messages are delivered in bigger batches. By adjusting higher, throughput is improved with some cost to latency. You can also adjust higher if the amount of data being produced is low. For example, if you set fetch.max.wait.ms to 500ms and fetch.min.bytes to 16384 bytes, when Kafka receives a fetch request from the consumer it will respond when the first of either threshold is reached. 
Conversely, you can adjust the fetch.max.wait.ms and fetch.min.bytes properties lower to improve end-to-end latency. # ... fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2 # ... 1 The maximum time in milliseconds the broker will wait before completing fetch requests. The default is 500 milliseconds. 2 If a minimum batch size in bytes is used, a request is sent when the minimum is reached, or messages have been queued for longer than fetch.max.wait.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. Lowering latency by increasing the fetch request size Use the fetch.max.bytes and max.partition.fetch.bytes properties to increase the maximum amount of data fetched by the consumer from the Kafka broker. The fetch.max.bytes property sets a maximum limit in bytes on the amount of data fetched from the broker at one time. The max.partition.fetch.bytes sets a maximum limit in bytes on how much data is returned for each partition, which must always be larger than the number of bytes set in the broker or topic configuration for max.message.bytes . The maximum amount of memory a client can consume is calculated approximately as: NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes If memory usage can accommodate it, you can increase the values of these two properties. By allowing more data in each request, latency is improved as there are fewer fetch requests. # ... fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2 # ... 1 The maximum amount of data in bytes returned for a fetch request. 2 The maximum amount of data in bytes returned for each partition. 12.7.3.5. Avoiding data loss or duplication when committing offsets The Kafka auto-commit mechanism allows a consumer to commit the offsets of messages automatically. If enabled, the consumer will commit offsets received from polling the broker at 5000ms intervals. The auto-commit mechanism is convenient, but it introduces a risk of data loss and duplication. If a consumer has fetched and transformed a number of messages, but the system crashes with processed messages in the consumer buffer when performing an auto-commit, that data is lost. If the system crashes after processing the messages, but before performing the auto-commit, the data is duplicated on another consumer instance after rebalancing. Auto-committing can avoid data loss only when all messages are processed before the poll to the broker, or the consumer closes. To minimize the likelihood of data loss or duplication, you can set enable.auto.commit to false and develop your client application to have more control over committing offsets. Or you can use auto.commit.interval.ms to decrease the intervals between commits. # ... enable.auto.commit=false 1 # ... 1 Auto commit is set to false to provide more control over committing offsets. By setting to enable.auto.commit to false , you can commit offsets after all processing has been performed and the message has been consumed. For example, you can set up your application to call the Kafka commitSync and commitAsync commit APIs. The commitSync API commits the offsets in a message batch returned from polling. You call the API when you are finished processing all the messages in the batch. If you use the commitSync API, the application will not poll for new messages until the last offset in the batch is committed. If this negatively affects throughput, you can commit less frequently, or you can use the commitAsync API. 
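As a hedged illustration of this pattern, the sketch below disables auto-commit and calls commitSync only after every record in the polled batch has been processed. The topic, group id, and processing step are placeholders, not values from this documentation.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "my-group-id");
        props.put("enable.auto.commit", "false"); // commit offsets explicitly
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) { // a real application would also handle shutdown, for example with consumer.wakeup()
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // application-specific processing
                }
                consumer.commitSync(); // commit only after the whole polled batch is processed
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) { /* ... */ }
}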
The commitAsync API does not wait for the broker to respond to a commit request, but risks creating more duplicates when rebalancing. A common approach is to combine both commit APIs in an application, with the commitSync API used just before shutting the consumer down or rebalancing to make sure the final commit is successful. 12.7.3.5.1. Controlling transactional messages Consider using transactional ids and enabling idempotence ( enable.idempotence=true ) on the producer side to guarantee exactly-once delivery. On the consumer side, you can then use the isolation.level property to control how transactional messages are read by the consumer. The isolation.level property has two valid values: read_committed read_uncommitted (default) Use read_committed to ensure that only transactional messages that have been committed are read by the consumer. However, this will cause an increase in end-to-end latency, because the consumer will not be able to return a message until the brokers have written the transaction markers that record the result of the transaction ( committed or aborted ). # ... enable.auto.commit=false isolation.level=read_committed 1 # ... 1 Set to read_committed so that only committed messages are read by the consumer. 12.7.3.6. Recovering from failure to avoid data loss Use the session.timeout.ms and heartbeat.interval.ms properties to configure the time taken to check and recover from consumer failure within a consumer group. The session.timeout.ms property specifies the maximum amount of time in milliseconds a consumer within a consumer group can be out of contact with a broker before being considered inactive and a rebalancing is triggered between the active consumers in the group. When the group rebalances, the partitions are reassigned to the members of the group. The heartbeat.interval.ms property specifies the interval in milliseconds between heartbeat checks to the consumer group coordinator to indicate that the consumer is active and connected. The heartbeat interval must be lower, usually by a third, than the session timeout interval. If you set the session.timeout.ms property lower, failing consumers are detected earlier, and rebalancing can take place more quickly. However, take care not to set the timeout so low that the broker fails to receive a heartbeat in time and triggers an unnecessary rebalance. Decreasing the heartbeat interval reduces the chance of accidental rebalancing, but more frequent heartbeats increase the overhead on broker resources. 12.7.3.7. Managing offset policy Use the auto.offset.reset property to control how a consumer behaves when no offsets have been committed, or a committed offset is no longer valid or deleted. Suppose you deploy a consumer application for the first time, and it reads messages from an existing topic. Because this is the first time the group.id is used, the __consumer_offsets topic does not contain any offset information for this application. The new application can start processing all existing messages from the start of the log or only new messages. The default reset value is latest , which starts at the end of the partition, and consequently means some messages are missed. To avoid data loss, but increase the amount of processing, set auto.offset.reset to earliest to start at the beginning of the partition. Also consider using the earliest option to avoid messages being lost when the offsets retention period ( offsets.retention.minutes ) configured for a broker has ended.
If a consumer group or standalone consumer is inactive and commits no offsets during the retention period, previously committed offsets are deleted from __consumer_offsets . # ... heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3 # ... 1 Adjust the heartbeat interval lower according to anticipated rebalances. 2 If no heartbeats are received by the Kafka broker before the timeout duration expires, the consumer is removed from the consumer group and a rebalance is initiated. If the broker configuration has a group.min.session.timeout.ms and group.max.session.timeout.ms , the session timeout value must be within that range. 3 Set to earliest to return to the start of a partition and avoid data loss if offsets were not committed. If the amount of data returned in a single fetch request is large, a timeout might occur before the consumer has processed it. In this case, you can lower max.partition.fetch.bytes or increase session.timeout.ms . 12.7.3.8. Minimizing the impact of rebalances The rebalancing of a partition between active consumers in a group is the time it takes for: Consumers to commit their offsets The new consumer group to be formed The group leader to assign partitions to group members The consumers in the group to receive their assignments and start fetching Clearly, the process increases the downtime of a service, particularly when it happens repeatedly during a rolling restart of a consumer group cluster. In this situation, you can use the concept of static membership to reduce the number of rebalances. Rebalancing assigns topic partitions evenly among consumer group members. Static membership uses persistence so that a consumer instance is recognized during a restart after a session timeout. The consumer group coordinator can identify a new consumer instance using a unique id that is specified using the group.instance.id property. During a restart, the consumer is assigned a new member id, but as a static member it continues with the same instance id, and the same assignment of topic partitions is made. If the consumer application does not make a call to poll at least every max.poll.interval.ms milliseconds, the consumer is considered to be failed, causing a rebalance. If the application cannot process all the records returned from poll in time, you can avoid a rebalance by using the max.poll.interval.ms property to specify the interval in milliseconds between polls for new messages from a consumer. Or you can use the max.poll.records property to set a maximum limit on the number of records returned from the consumer buffer, allowing your application to process fewer records within the max.poll.interval.ms limit. # ... group.instance.id=_UNIQUE-ID_ 1 max.poll.interval.ms=300000 2 max.poll.records=500 3 # ... 1 The unique instance id ensures that a new consumer instance receives the same assignment of topic partitions. 2 Set the interval to check the consumer is continuing to process messages. 3 Sets the number of processed records returned from the consumer. 12.8. Uninstalling AMQ Streams This procedure describes how to uninstall AMQ Streams and remove resources related to the deployment. Prerequisites In order to perform this procedure, identify resources created specifically for a deployment and referenced from the AMQ Streams resource. 
Such resources include: Secrets (Custom CAs and certificates, Kafka Connect secrets, and other Kafka secrets) Logging ConfigMaps (of type external ) These are resources referenced by Kafka , KafkaConnect , KafkaConnectS2I , KafkaMirrorMaker , or KafkaBridge configuration. Procedure Delete the Cluster Operator Deployment , related CustomResourceDefinitions , and RBAC resources: Warning Deleting CustomResourceDefinitions results in the garbage collection of the corresponding custom resources ( Kafka , KafkaConnect , KafkaConnectS2I , KafkaMirrorMaker , or KafkaBridge ) and the resources dependent on them (Deployments, StatefulSets, and other dependent resources). Delete the resources you identified in the prerequisites. 12.9. Frequently asked questions 12.9.1. Questions related to the Cluster Operator 12.9.1.1. Why do I need cluster administrator privileges to install AMQ Streams? To install AMQ Streams, you need to be able to create the following cluster-scoped resources: Custom Resource Definitions (CRDs) to instruct OpenShift about resources that are specific to AMQ Streams, such as Kafka and KafkaConnect ClusterRoles and ClusterRoleBindings Cluster-scoped resources, which are not scoped to a particular OpenShift namespace, typically require cluster administrator privileges to install. As a cluster administrator, you can inspect all the resources being installed (in the /install/ directory) to ensure that the ClusterRoles do not grant unnecessary privileges. After installation, the Cluster Operator runs as a regular Deployment , so any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. The cluster administrator can grant standard users the privileges necessary to manage Kafka custom resources. See also: Why does the Cluster Operator need to create ClusterRoleBindings ? Can standard OpenShift users create Kafka custom resources? 12.9.1.2. Why does the Cluster Operator need to create ClusterRoleBindings ? OpenShift has built-in privilege escalation prevention , which means that the Cluster Operator cannot grant privileges it does not have itself, specifically, it cannot grant such privileges in a namespace it cannot access. Therefore, the Cluster Operator must have the privileges necessary for all the components it orchestrates. The Cluster Operator needs to be able to grant access so that: The Topic Operator can manage KafkaTopics , by creating Roles and RoleBindings in the namespace that the operator runs in The User Operator can manage KafkaUsers , by creating Roles and RoleBindings in the namespace that the operator runs in The failure domain of a Node is discovered by AMQ Streams, by creating a ClusterRoleBinding When using rack-aware partition assignment, the broker pod needs to be able to get information about the Node it is running on, for example, the Availability Zone in Amazon AWS. A Node is a cluster-scoped resource, so access to it can only be granted through a ClusterRoleBinding , not a namespace-scoped RoleBinding . 12.9.1.3. Can standard OpenShift users create Kafka custom resources? By default, standard OpenShift users will not have the privileges necessary to manage the custom resources handled by the Cluster Operator. The cluster administrator can grant a user the necessary privileges using OpenShift RBAC resources. For more information, see Designating AMQ Streams administrators in the Deploying and Upgrading AMQ Streams on OpenShift guide. 12.9.1.4. What do the failed to acquire lock warnings in the log mean? 
For each cluster, the Cluster Operator executes only one operation at a time. The Cluster Operator uses locks to make sure that there are never two parallel operations running for the same cluster. Other operations must wait until the current operation completes before the lock is released. INFO Examples of cluster operations include cluster creation , rolling update , scale down , and scale up . If the waiting time for the lock takes too long, the operation times out and the following warning message is printed to the log: 2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster Depending on the exact configuration of STRIMZI_FULL_RECONCILIATION_INTERVAL_MS and STRIMZI_OPERATION_TIMEOUT_MS , this warning message might appear occasionally without indicating any underlying issues. Operations that time out are picked up in the periodic reconciliation, so that the operation can acquire the lock and execute again. Should this message appear periodically, even in situations when there should be no other operations running for a given cluster, it might indicate that the lock was not properly released due to an error. If this is the case, try restarting the Cluster Operator. 12.9.1.5. Why is hostname verification failing when connecting to NodePorts using TLS? Currently, off-cluster access using NodePorts with TLS encryption enabled does not support TLS hostname verification. As a result, the clients that verify the hostname will fail to connect. For example, the Java client will fail with the following exception: Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more To connect, you must disable hostname verification. In the Java client, you can do this by setting the configuration option ssl.endpoint.identification.algorithm to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", "");
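For context, the following hedged Java sketch shows where this property sits in a complete client configuration for a TLS-enabled NodePort listener. The address, truststore path, and password are placeholders, not values from this documentation.

import java.util.Properties;

public class NodePortTlsConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.247:9094");        // placeholder NodePort address
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/tmp/ca.p12");          // placeholder truststore
        props.put("ssl.truststore.password", "changeit");             // placeholder password
        props.put("ssl.truststore.type", "PKCS12");
        props.put("ssl.endpoint.identification.algorithm", "");       // disable TLS hostname verification
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("group.id", "my-group-id");
        return props;
    }
}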
[ "get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3", "get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple", "get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user", "delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}' 192.168.1.247:9094", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: conditions: 1 - lastTransitionTime: 2021-07-23T23:46:57+0000 status: \"True\" type: Ready 2 observedGeneration: 4 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9092 type: plain - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9093 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- type: tls - addresses: - host: 172.29.49.180 port: 9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- type: external clusterId: CLUSTER-ID 5", "get kafka <kafka_resource_name> -o jsonpath='{.status}'", "annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation=\"true\"", "annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"", "describe KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused", "annotate statefulset cluster-name -kafka strimzi.io/manual-rolling-update=true annotate statefulset cluster-name -zookeeper strimzi.io/manual-rolling-update=true", "annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service", "get service -l strimzi.io/discovery=true", "apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain", "apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain", "apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain", "get pv", 
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2", "create namespace myproject", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c", "apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem", "claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea", "create -f install/cluster-operator -n my-project", "apply -f kafka.yaml", "run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #", "get KafkaTopic", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6", "apply -f KAFKA-CONFIG-FILE", "num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 
log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000", "num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576", "auto.create.topics.enable=false delete.topic.enable=true", "transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2", "offsets.topic.num.partitions=50 offsets.topic.replication.factor=3", "num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=1 4", "replica.socket.receive.buffer.bytes=65536", "socket.request.max.bytes=104857600", "socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576", "log.segment.bytes=1073741824 log.roll.ms=604800000", "log.retention.ms=1680000", "log.retention.bytes=1073741824", "log.segment.delete.delay.ms=60000", "log.cleaner.enable=true", "log.cleanup.policy=compact,delete", "log.retention.check.interval.ms=300000", "log.cleaner.backoff.ms=15000", "log.cleaner.delete.retention.ms=86400000", "log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9", "log.cleaner.threads=8", "log.cleaner.io.max.bytes.per.second=1.7976931348623157E308", "log.flush.scheduler.interval.ms=2000", "log.flush.interval.ms=50000 log.flush.interval.messages=100000", "replica.lag.time.max.ms=30000", "# auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #", "unclean.leader.election.enable=false", "group.initial.rebalance.delay.ms=3000", "bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5", "acks=all 1", "min.insync.replicas=2 1", "enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4", "enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647", "enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2", "linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3", "delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2", "bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5", "group.id=my-group-id 1", "fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2", "NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes", "fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2", "enable.auto.commit=false 1", "enable.auto.commit=false isolation.level=read_committed 1", "heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3", "group.instance.id=_UNIQUE-ID_ 1 max.poll.interval.ms=300000 2 max.poll.records=500 3", "delete -f install/cluster-operator", "2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at 
sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/management-tasks-str
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/manage_secrets_with_openstack_key_manager/making-open-source-more-inclusive
Installing on vSphere
Installing on vSphere OpenShift Container Platform 4.14 Installing OpenShift Container Platform on vSphere Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_vsphere/index
Chapter 12. Configuring TLS security profiles
Chapter 12. Configuring TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. Cluster administrators can choose which TLS security profile to use for each of the following components: the Ingress Controller the control plane This includes the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, OpenShift API server, OpenShift OAuth API server, OpenShift OAuth server, and etcd. the kubelet, when it acts as an HTTP server for the Kubernetes API server 12.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 12.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 12.2. Viewing TLS security profile details You can view the minimum TLS version and ciphers for the predefined TLS security profiles for each of the following components: Ingress Controller, control plane, and kubelet. Important The effective configuration of minimum TLS version and list of ciphers for a profile might differ between components. Procedure View details for a specific TLS security profile: USD oc explain <component>.spec.tlsSecurityProfile.<profile> 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For <profile> , specify old , intermediate , or custom . 
For example, to check the ciphers included for the intermediate profile for the control plane: USD oc explain apiserver.spec.tlsSecurityProfile.intermediate Example output KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 View all details for the tlsSecurityProfile field of a component: USD oc explain <component>.spec.tlsSecurityProfile 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For example, to check all details for the tlsSecurityProfile field for the Ingress Controller: USD oc explain ingresscontroller.spec.tlsSecurityProfile Example output KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: ... FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string> ... 1 Lists ciphers and minimum version for the intermediate profile here. 2 Lists ciphers and minimum version for the modern profile here. 3 Lists ciphers and minimum version for the old profile here. 12.3. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . 
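For comparison with the Old and Custom samples, a minimal sketch of an IngressController CR that explicitly selects the predefined Intermediate profile might look like this. Because Intermediate is already the default, this is only needed to make the choice explicit.

apiVersion: operator.openshift.io/v1
kind: IngressController
...
spec:
  tlsSecurityProfile:
    intermediate: {}
    type: Intermediate
...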
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 12.4. Configuring the TLS security profile for the control plane To configure a TLS security profile for the control plane, edit the APIServer custom resource (CR) to specify a predefined or custom TLS security profile. Setting the TLS security profile in the APIServer CR propagates the setting to the following control plane components: Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift OAuth API server OpenShift OAuth server etcd If a TLS security profile is not configured, the default TLS security profile is Intermediate . Note The default TLS security profile for the Ingress Controller is based on the TLS security profile set for the API server. Sample APIServer CR that configures the Old TLS security profile apiVersion: config.openshift.io/v1 kind: APIServer ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers required to communicate with the control plane components. You can see the configured TLS security profile in the APIServer custom resource (CR) under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed. Note The control plane does not support TLS 1.3 as the minimum TLS version; the Modern profile is not supported because it requires TLS 1.3 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the default APIServer CR to configure the TLS security profile: USD oc edit APIServer cluster Add the spec.tlsSecurityProfile field: Sample APIServer CR for a Custom profile apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 
2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the TLS security profile is set in the APIServer CR: USD oc describe apiserver cluster Example output Name: cluster Namespace: ... API Version: config.openshift.io/v1 Kind: APIServer ... Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... Verify that the TLS security profile is set in the etcd CR: USD oc describe etcd cluster Example output Name: cluster Namespace: ... API Version: operator.openshift.io/v1 Kind: Etcd ... Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12 ... 12.5. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. 
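One way to follow that rollout, assuming the worker machine config pool is the one selected by the label in the example above, is to watch the pool until it reports that all machines are updated:

oc get mcp worker -w

The UPDATED column switches to True once every node in the pool has rebooted with the new kubelet configuration.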
Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #...
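Beyond inspecting kubelet.conf, you can probe a TLS endpoint directly to confirm which protocol versions it accepts. The following sketch assumes the openssl client is available and uses a placeholder route hostname; with the Intermediate profile on the Ingress Controller, the TLS 1.1 handshake should be rejected while the TLS 1.2 handshake succeeds:

# Expect this handshake to fail (TLS 1.1 is below the Intermediate minimum)
openssl s_client -connect <route_hostname>:443 -tls1_1 < /dev/null

# Expect this handshake to succeed
openssl s_client -connect <route_hostname>:443 -tls1_2 < /dev/null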
[ "oc explain <component>.spec.tlsSecurityProfile.<profile> 1", "oc explain apiserver.spec.tlsSecurityProfile.intermediate", "KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2", "oc explain <component>.spec.tlsSecurityProfile 1", "oc explain ingresscontroller.spec.tlsSecurityProfile", "KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 
3 type <string>", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old", "oc edit APIServer cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe apiserver cluster", "Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe etcd cluster", "Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/tls-security-profiles
Chapter 364. Vert.x Component
Chapter 364. Vert.x Component Available as of Camel version 2.12 The vertx component is for working with the Vertx EventBus . The vertx EventBus sends and receives JSON events. INFO:From Camel 2.16 onwards vertx 3 is in use which requires Java 1.8 at runtime. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-vertx</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 364.1. URI format vertx:channelName[?options] 364.2. Options The Vert.x component supports 7 options, which are listed below. Name Description Default Type vertxFactory (advanced) To use a custom VertxFactory implementation VertxFactory host (common) Hostname for creating an embedded clustered EventBus String port (common) Port for creating an embedded clustered EventBus int vertxOptions (common) Options to use for creating vertx VertxOptions vertx (common) To use the given vertx EventBus instead of creating a new embedded EventBus Vertx timeout (common) Timeout in seconds to wait for clustered Vertx EventBus to be ready. The default value is 60. 60 int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Vert.x endpoint is configured using URI syntax: with the following path and query parameters: 364.2.1. Path Parameters (1 parameters): Name Description Default Type address Required Sets the event bus address used to communicate String 364.2.2. Query Parameters (5 parameters): Name Description Default Type pubSub (common) Whether to use publish/subscribe instead of point to point when sending to a vertx endpoint. Boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 364.3. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.component.vertx.enabled Enable vertx component true Boolean camel.component.vertx.host Hostname for creating an embedded clustered EventBus String camel.component.vertx.port Port for creating an embedded clustered EventBus Integer camel.component.vertx.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.vertx.timeout Timeout in seconds to wait for clustered Vertx EventBus to be ready. The default value is 60. 
60 Integer camel.component.vertx.vertx To use the given vertx EventBus instead of creating a new embedded EventBus. The option is an io.vertx.core.Vertx type. String camel.component.vertx.vertx-factory To use a custom VertxFactory implementation. The option is an io.vertx.core.spi.VertxFactory type. String camel.component.vertx.vertx-options Options to use for creating vertx. The option is an io.vertx.core.VertxOptions type. String The pubSub endpoint option (available since Camel 2.12.3) controls whether to use publish/subscribe instead of point to point when sending to a vertx endpoint. 364.4. Connecting to the existing Vert.x instance If you would like to connect to the Vert.x instance already existing in your JVM, you can set the instance on the component level: Vertx vertx = ...; VertxComponent vertxComponent = new VertxComponent(); vertxComponent.setVertx(vertx); camelContext.addComponent("vertx", vertxComponent); 364.5. See Also Configuring Camel Component Endpoint Getting Started
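The following Java DSL sketch pulls the pieces above together; the address name news.updates and the use of the pubSub option are illustrative assumptions rather than part of the original chapter:

import org.apache.camel.builder.RouteBuilder;

public class VertxExampleRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Publish each incoming message to all subscribers of the event bus address
        from("direct:start")
            .to("vertx:news.updates?pubSub=true");

        // Consume events arriving on the same address and pass them on
        from("vertx:news.updates")
            .log("Received event: ${body}")
            .to("mock:result");
    }
}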
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-vertx</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "vertx:channelName[?options]", "vertx:address", "You can append query options to the URI in the following format, ?option=value&option=value&", "Vertx vertx = ...; VertxComponent vertxComponent = new VertxComponent(); vertxComponent.setVertx(vertx); camelContext.addComponent(\"vertx\", vertxComponent);" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/vertx-component
Declarative cluster configuration
Declarative cluster configuration Red Hat OpenShift GitOps 1.15 Configuring an OpenShift cluster with cluster configurations by using OpenShift GitOps and creating and synchronizing applications in the default and code mode by using the GitOps CLI Red Hat OpenShift Documentation Team
[ "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: config: env: - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances>", "auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller", "- verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - get - list nonResourceURLs: - '*'", "oc edit clusterrole argocd-server oc edit clusterrole argocd-application-controller", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute", "apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true", "apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved", "git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git", "oc create -f openshift-gitops-getting-started/argo/app.yaml", "oc get application -n openshift-gitops", "oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops", "ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d)", "SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')", "argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL}", "argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing", "argocd app list", "NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET", "argocd app create app-cluster-configs --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git --path cluster --revision main --dest-server https://kubernetes.default.svc --dest-namespace spring-petclinic --directory-recurse --sync-policy none --sync-option Prune=true --sync-option CreateNamespace=true", "oc label ns spring-petclinic \"argocd.argoproj.io/managed-by=openshift-gitops\"", "argocd app list", "oc login -u <username> -p <password> <server_url>", "oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443", "oc config current-context", "oc config set-context --current --namespace openshift-gitops", "export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server", "argocd app list --core", "NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET", "argocd app create app-cluster-configs --core --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git --path cluster --revision main --dest-server https://kubernetes.default.svc --dest-namespace spring-petclinic --directory-recurse --sync-policy none --sync-option Prune=true --sync-option CreateNamespace=true", "oc label ns spring-petclinic \"argocd.argoproj.io/managed-by=openshift-gitops\"", "argocd app list --core", "ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d)", "SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')", "argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL}", "argocd 
login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing", "argocd app sync openshift-gitops/app-cluster-configs", "argocd app list", "oc login -u <username> -p <password> <server_url>", "oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443", "oc config current-context", "oc config set-context --current --namespace openshift-gitops", "export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server", "argocd app sync --core openshift-gitops/app-cluster-configs", "argocd app list --core", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: secrets-cluster-role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"*\"]", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-role-binding subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: secrets-cluster-role", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: grafana spec: channel: v4 installPlanApproval: Automatic name: grafana-operator source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: ansible-automation-platform apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: patch-me installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace", "persistentvolumes is forbidden: User \"system:serviceaccount:gitops-demo:argocd-argocd-application-controller\" cannot create resource \"persistentvolumes\" in API group \"\" at the cluster scope.", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example 1 namespace: spring-petclinic 2 spec: defaultClusterScopedRoleDisabled: true 3", "argocd.argoproj.io/example configured", "oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>", "oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>", "No resources found", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: example-spring-petclinic-argocd-application-controller 1 rules: - verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - '*' apiGroups: - '' resources: 2 - namespaces - persistentvolumes", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: example-spring-petclinic-argocd-application-controller subjects: - kind: ServiceAccount name: example-argocd-application-controller namespace: spring-petclinic roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: example-spring-petclinic-argocd-application-controller", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example 1 namespace: spring-petclinic 2 spec: aggregatedClusterRoles: true 3", "argocd.argoproj.io/example configured", "oc describe argocd.argoproj.io/example -n spring-petclinic", "Name: example Namespace: spring-petclinic Labels: <none> Annotations: <none> API Version: argoproj.io/v1beta1 Kind: ArgoCD Metadata: Creation Timestamp: 2024-08-14T08:20:53Z Finalizers: 
argoproj.io/finalizer Generation: 3 Resource Version: 60437 UID: 57940e54-d60b-4c1a-bc4a-85c81c63ab69 Spec: Aggregated Cluster Roles: true Status: Application Controller: Running Application Set Controller: Unknown Phase: Available 1 Redis: Running Repo: Running Server: Running Sso: Unknown Events: <none>", "oc get ClusterRoles -l app.kubernetes.io/part-of=argocd", "NAME CREATED AT example-spring-petclinic-argocd-application-controller 2024-08-14T08:20:58Z example-spring-petclinic-argocd-application-controller-admin 2024-08-14T09:08:38Z example-spring-petclinic-argocd-application-controller-view 2024-08-14T09:08:38Z example-spring-petclinic-argocd-server 2024-08-14T08:20:59Z", "oc get ClusterRoleBindings -l app.kubernetes.io/part-of=argocd", "NAME ROLE AGE example-spring-petclinic-argocd-application-controller ClusterRole/example-spring-petclinic-argocd-application-controller 54m example-spring-petclinic-argocd-server ClusterRole/example-spring-petclinic-argocd-server 54m", "oc get ClusterRole/<cluster_role_name> -o yaml 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: argocds.argoproj.io/name: example argocds.argoproj.io/namespace: spring-petclinic kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"argoproj.io/v1beta1\",\"kind\":\"ArgoCD\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"spring-petclinic\"},\"spec\":{\"aggregatedClusterRoles\":true}} rbac.authorization.kubernetes.io/autoupdate: \"true\" creationTimestamp: \"2024-08-14T08:20:58Z\" labels: app.kubernetes.io/managed-by: spring-petclinic app.kubernetes.io/name: example app.kubernetes.io/part-of: argocd name: example-spring-petclinic-argocd-application-controller 1 resourceVersion: \"78640\" uid: aeeb2ef5-b531-4fe3-a61a-b5ad8dd8ca6e aggregationRule: 2 clusterRoleSelectors: - matchLabels: app.kubernetes.io/managed-by: spring-petclinic argocd/aggregate-to-controller: \"true\" rules: [] 3", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: argocds.argoproj.io/name: example argocds.argoproj.io/namespace: spring-petclinic kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"argoproj.io/v1beta1\",\"kind\":\"ArgoCD\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"spring-petclinic\"},\"spec\":{\"aggregatedClusterRoles\":true}} creationTimestamp: \"2024-08-14T09:59:14Z\" labels: 1 app.kubernetes.io/managed-by: spring-petclinic app.kubernetes.io/name: example app.kubernetes.io/part-of: argocd argocd/aggregate-to-controller: \"true\" name: example-spring-petclinic-argocd-application-controller-view 2 resourceVersion: \"78639\" uid: 068b8867-7a0c-4af3-a17a-0560a00eba41 rules: 3 - apiGroups: - '*' resources: - '*' verbs: - get - list - watch - nonResourceURLs: - '*' verbs: - get - list", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: argocds.argoproj.io/name: example argocds.argoproj.io/namespace: spring-petclinic kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"argoproj.io/v1beta1\",\"kind\":\"ArgoCD\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"spring-petclinic\"},\"spec\":{\"aggregatedClusterRoles\":true}} rbac.authorization.kubernetes.io/autoupdate: \"true\" creationTimestamp: \"2024-08-14T09:59:15Z\" labels: 1 app.kubernetes.io/managed-by: spring-petclinic app.kubernetes.io/name: example app.kubernetes.io/part-of: argocd argocd/aggregate-to-controller: \"true\" name: 
example-spring-petclinic-argocd-application-controller-admin 2 resourceVersion: \"78642\" uid: e2d35b6f-0832-4993-8b24-915a725454f9 aggregationRule: 3 clusterRoleSelectors: - matchLabels: app.kubernetes.io/managed-by: spring-petclinic argocd/aggregate-to-admin: \"true\" rules: null 4", "oc apply -n <namespace> -f <cluster_role_name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: user-application-controller 1 labels: 2 app.kubernetes.io/managed-by: spring-petclinic app.kubernetes.io/name: example app.kubernetes.io/part-of: argocd argocd/aggregate-to-admin: 'true' rules: 3 - verbs: - '*' apiGroups: - '' resources: - namespaces - persistentvolumeclaims - persistentvolumes - configmaps - verbs: - '*' apiGroups: - compliance.openshift.io resources: - scansettingbindings", "clusterrole.rbac.authorization.k8s.io/user-application-controller created", "oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller-admin -o yaml", "aggregationRule: clusterRoleSelectors: - matchLabels: app.kubernetes.io/managed-by: spring-petclinic argocd/aggregate-to-admin: \"true\" apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: argocds.argoproj.io/name: example argocds.argoproj.io/namespace: spring-petclinic kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"argoproj.io/v1beta1\",\"kind\":\"ArgoCD\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"spring-petclinic\"},\"spec\":{\"aggregatedClusterRoles\":true}} creationTimestamp: \"2024-08-14T09:59:15Z\" labels: app.kubernetes.io/managed-by: spring-petclinic app.kubernetes.io/name: example app.kubernetes.io/part-of: argocd argocd/aggregate-to-controller: \"true\" name: example-spring-petclinic-argocd-application-controller-admin resourceVersion: \"79202\" uid: e2d35b6f-0832-4993-8b24-915a725454f9 rules: - apiGroups: - \"\" resources: - namespaces - persistentvolumeclaims - persistentvolumes - configmaps verbs: - '*' - apiGroups: - compliance.openshift.io resources: - scansettingbindings verbs: - '*'", "oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller -o yaml", "aggregationRule: clusterRoleSelectors: - matchLabels: app.kubernetes.io/managed-by: spring-petclinic argocd/aggregate-to-controller: \"true\" apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: argocds.argoproj.io/name: example argocds.argoproj.io/namespace: spring-petclinic kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"argoproj.io/v1beta1\",\"kind\":\"ArgoCD\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"spring-petclinic\"},\"spec\":{\"aggregatedClusterRoles\":true}} rbac.authorization.kubernetes.io/autoupdate: \"true\" creationTimestamp: \"2024-08-14T08:20:58Z\" labels: app.kubernetes.io/managed-by: spring-petclinic app.kubernetes.io/name: example app.kubernetes.io/part-of: argocd name: example-spring-petclinic-argocd-application-controller resourceVersion: \"79203\" uid: aeeb2ef5-b531-4fe3-a61a-b5ad8dd8ca6e rules: - apiGroups: - \"\" resources: - namespaces - persistentvolumeclaims - persistentvolumes - configmaps verbs: - '*' - apiGroups: - compliance.openshift.io resources: - scansettingbindings verbs: - '*' - apiGroups: - '*' resources: - '*' verbs: - get - list - watch - nonResourceURLs: - '*' verbs: - get - list", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: enabled: true 1 
replicas: 3 2 env: 3 - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM value: round-robin logLevel: debug 4", "time=\"2023-12-13T09:05:34Z\" level=info msg=\"ArgoCD Application Controller is starting\" built=\"2023-12-01T19:21:49Z\" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Processing clusters from shard 1\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" 1 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"appResyncPeriod=3m0s, appHardResyncPeriod=0s\"", "time=\"2023-12-13T09:05:34Z\" level=debug msg=\"ClustersList has 3 items\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id= and name=in-cluster to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id= will be processed by shard 0\" 1 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1\" 2 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2\" 3", "oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"sharding\":{\"enabled\":true,\"replicas\":<value>}}}}' --type=merge", "argocd.argoproj.io/<argocd_instance> patched", "oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"env\":[{\"name\":\"ARGOCD_CONTROLLER_SHARDING_ALGORITHM\",\"value\":\"round-robin\"}]}}}' --type=merge", "argocd.argoproj.io/<argocd_instance> patched", "oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>", "NAME READY STATUS RESTARTS AGE <argocd_instance>-application-controller-0 1/1 Running 0 11s <argocd_instance>-application-controller-1 1/1 Running 0 32s <argocd_instance>-application-controller-2 1/1 Running 0 22s", "oc logs <argocd_application_controller_pod> -n <namespace>", "time=\"2023-12-13T09:05:34Z\" level=info msg=\"ArgoCD Application Controller is starting\" built=\"2023-12-01T19:21:49Z\" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Processing clusters from shard 1\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" 1 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"appResyncPeriod=3m0s, appHardResyncPeriod=0s\"", "oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"logLevel\":\"debug\"}}}' --type=merge", "argocd.argoproj.io/<argocd_instance> patched", "oc logs <argocd_application_controller_pod> -n <namespace> | grep \"processed by shard\"", "time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id= will be processed by shard 0\" 1 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1\" 2 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2\" 3", 
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: dynamicScalingEnabled: true 1 minShards: 1 2 maxShards: 3 3 clustersPerShard: 1 4", "oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{\"spec\":{\"controller\":{\"sharding\":{\"dynamicScalingEnabled\":true,\"minShards\":<value>,\"maxShards\":<value>,\"clustersPerShard\":<value>}}}}'", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{\"spec\":{\"controller\":{\"sharding\":{\"dynamicScalingEnabled\":true,\"minShards\":1,\"maxShards\":3,\"clustersPerShard\":1}}}}' 1", "argocd.argoproj.io/openshift-gitops patched", "oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'", "oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'", "{\"dynamicScalingEnabled\":true,\"minShards\":1,\"maxShards\":3,\"clustersPerShard\":1}", "oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller", "oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller", "NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html-single/declarative_cluster_configuration/index
5.3. Device Assignment and SR-IOV
5.3. Device Assignment and SR-IOV The following diagram demonstrates the involvement of the kernel in the Device Assignment and SR-IOV architectures. Figure 5.2. Device assignment and SR-IOV Device assignment presents the entire device to the guest. SR-IOV needs support in drivers and hardware, including the NIC and the system board, and allows multiple virtual devices to be created and passed into different guests. A vendor-specific driver is required in the guest; however, SR-IOV offers the lowest latency of any network option.
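As a practical illustration of the difference, the following shell sketch shows how you might confirm that a host NIC supports SR-IOV and create virtual functions through sysfs before assigning them to guests. The PCI address, the interface name enp4s0f0, and the VF count are placeholders, and the exact sysfs layout can vary by driver:

# Check whether the physical NIC advertises the SR-IOV capability
lspci -vvv -s 04:00.0 | grep -i "SR-IOV"

# See how many virtual functions the device supports
cat /sys/class/net/enp4s0f0/device/sriov_totalvfs

# Create 4 virtual functions (run as root); each can then be assigned to a guest
echo 4 > /sys/class/net/enp4s0f0/device/sriov_numvfs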
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-networking-device_assignment_and_sriov
Appendix C. Building Cloud Images for Red Hat Satellite
Appendix C. Building Cloud Images for Red Hat Satellite Use this section to build and register images to Red Hat Satellite. You can use a preconfigured Red Hat Enterprise Linux KVM guest QCOW2 image: Latest RHEL 8 KVM Guest Image Latest RHEL 7 KVM Guest Image These images contain cloud-init . To function properly, they must use ec2-compatible metadata services for provisioning an SSH key. Note For the KVM guest images: The root account in the image is disabled, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. If you want to create custom Red Hat Enterprise Linux images, see Composing a customized Red Hat Enterprise Linux 8 Image or Image Builder Guide for Red Hat Enterprise Linux 7 C.1. Creating Custom Red Hat Enterprise Linux Images Prerequisites Use a Linux host machine to create an image. In this example, we use a Red Hat Enterprise Linux 7 Workstation. Use virt-manager on your workstation to complete this procedure. If you create the image on a remote server, connect to the server from your workstation with virt-manager . A Red Hat Enterprise Linux 7 or 6 ISO file (see Red Hat Enterprise Linux 7.4 Binary DVD or Red Hat Enterprise Linux 6.9 Binary DVD ). For more information about installing a Red Hat Enterprise Linux Workstation, see Red Hat Enterprise Linux 7 Installation Guide . Before you can create custom images, install the following packages: Install libvirt , qemu-kvm , and graphical tools: Install the following command line tools: Note In the following procedures, enter all commands with the [root@host]# prompt on the workstation that hosts the libvirt environment. C.2. Supported Clients in Registration Satellite supports the following operating systems and architectures for registration. Supported Host Operating Systems The hosts can use the following operating systems: Red Hat Enterprise Linux 9, 8, 7 Red Hat Enterprise Linux 6 with the ELS Add-On Supported Host Architectures The hosts can use the following architectures: i386 x86_64 s390x ppc_64 C.3. Configuring a Host for Registration Configure your host for registration to Satellite Server or Capsule Server. Prerequisites The host must be using a supported operating system. For more information, see Section C.2, "Supported Clients in Registration" . Procedure Ensure that a time-synchronization tool is enabled and running on the host. C.4. Registering a Host You can register a host by using registration templates and set up various integration features and host tools during the registration process. Prerequisites Your user account has a role assigned that has the create_hosts permission. You must have root privileges on the host that you want to register. Satellite Server, any Capsule Servers, and all hosts must be synchronized with the same NTP server, and have a time synchronization tool enabled and running. An activation key must be available for the host. For more information, see Managing Activation Keys in Managing Content . If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . If your Satellite Server or Capsule Server is behind an HTTP proxy, configure the Subscription Manager on your host to use the HTTP proxy for connection. 
For more information, see How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy in the Red Hat Knowledgebase . Procedure In the Satellite web UI, navigate to Hosts > Register Host . Optional: Select a different Organization . Optional: Select a different Location . Optional: From the Host Group list, select the host group to associate the hosts with. Fields that inherit value from Host group : Operating system , Activation Keys and Lifecycle environment . Optional: From the Operating system list, select the operating system of hosts that you want to register. Optional: From the Capsule list, select the Capsule to register hosts through. Optional: Select the Insecure option, if you want to make the first call insecure. During this first call, hosts download the CA file from Satellite. Hosts will use this CA file to connect to Satellite with all future calls making them secure. Red Hat recommends that you avoid insecure calls. If an attacker, located in the network between Satellite and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key. Instead, you can manually copy and install the CA file on each host before registering the host. To do this, find where Satellite stores the CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting. Copy the CA file to the /etc/pki/ca-trust/source/anchors/ directory on hosts and enter the following commands: Then register the hosts with a secure curl command, such as: The following is an example of the curl command with the --insecure option: Select the Advanced tab. From the Setup REX list, select whether you want to deploy Satellite SSH keys to hosts or not. If set to Yes , public SSH keys will be installed on the registered host. The inherited value is based on the host_registration_remote_execution parameter. It can be inherited, for example from a host group, an operating system, or an organization. When overridden, the selected value will be stored on host parameter level. From the Setup Insights list, select whether you want to install insights-client and register the hosts to Insights. The Insights tool is available for Red Hat Enterprise Linux only. It has no effect on other operating systems. You must enable the following repositories on a registered machine: RHEL 6: rhel-6-server-rpms RHEL 7: rhel-7-server-rpms RHEL 8: rhel-8-for-x86_64-appstream-rpms The insights-client package is installed by default on RHEL 8 except in environments whereby RHEL 8 was deployed with "Minimal Install" option. Optional: In the Install packages field, list the packages (separated with spaces) that you want to install on the host upon registration. This can be set by the host_packages parameter. Optional: Select the Update packages option to update all packages on the host upon registration. This can be set by the host_update_packages parameter. Optional: In the Repository field, enter a repository to be added before the registration is performed. For example, it can be useful to make the subscription-manager package available for the purpose of the registration. For Red Hat family distributions, enter the URL of the repository, for example http://rpm.example.com/ . 
Optional: In the Repository GPG key URL field, specify the public key to verify the signatures of GPG-signed packages. It needs to be specified in the ASCII form with the GPG public key header. Optional: In the Token lifetime (hours) field, change the validity duration of the JSON Web Token (JWT) that Satellite uses for authentication. The duration of this token defines how long the generated curl command works. You can set the duration to 0 - 999 999 hours or unlimited. Note that Satellite applies the permissions of the user who generates the curl command to authorization of hosts. If the user loses or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWTs is limited to the registration endpoints only and cannot be used anywhere else. Optional: In the Remote Execution Interface field, enter the identifier of a network interface that hosts must use for the SSH connection. If you keep this field blank, Satellite uses the default network interface. In the Activation Keys field, enter one or more activation keys to assign to hosts. Optional: Select the Lifecycle environment . Optional: Select the Ignore errors option if you want to ignore subscription manager errors. Optional: Select the Force option if you want to remove any katello-ca-consumer rpms before registration and run subscription-manager with the --force argument. Click the Generate button. Copy the generated curl command. On the host that you want to register, run the curl command as root . C.5. Installing the Katello Agent You can install the Katello agent to remotely update Satellite clients. Note The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely. For more information, see Migrating Hosts from Katello Agent to Remote Execution in Managing Hosts . The katello-agent package depends on the gofer package that provides the goferd service. Prerequisites You have enabled the Satellite Client 6 repository on Satellite Server. For more information, see Enabling the Satellite Client 6 Repository in Installing Satellite Server in a Connected Network Environment . You have synchronized the Satellite Client 6 repository on Satellite Server. For more information, see Synchronizing the Satellite Client 6 Repository in Installing Satellite Server in a Connected Network Environment . You have enabled the Satellite Client 6 repository on the client. Procedure Install the katello-agent package: Start the goferd service: C.6. Installing and Configuring Puppet Agent on a Host Manually Install and configure the Puppet agent on a host manually. Prerequisites The host must have a Puppet environment assigned to it. The Satellite Client 6 repository must be enabled and synchronized to Satellite Server, and enabled on the host. For more information, see Importing Content in Managing Content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . 
From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: Additional Resources For more information about Puppet, see Managing Configurations Using Puppet Integration in Red Hat Satellite . C.7. Completing the Red Hat Enterprise Linux 7 Image Procedure Update the system: Install the cloud-init packages: Open the /etc/cloud/cloud.cfg configuration file: Under the heading cloud_init_modules , add: The resolv-conf option automatically configures the resolv.conf when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain and other options. Open the /etc/sysconfig/network file: Add the following line to avoid problems accessing the EC2 metadata service: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: On your Red Hat Enterprise Linux Workstation, connect to the terminal as the root user and navigate to the /var/lib/libvirt/images/ directory: Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel7-cloud.qcow2 file in the location where you enter the command. C.8. Completing the Red Hat Enterprise Linux 6 Image Procedure Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add: The resolv-conf option automatically configures the resolv.conf configuration file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. To prevent network issues, create the /etc/udev/rules.d/75-persistent-net-generator.rules file as follows: This prevents /etc/udev/rules.d/70-persistent-net.rules file from being created. If /etc/udev/rules.d/70-persistent-net.rules is created, networking might not function properly when booting from snapshots (the network interface is created as "eth1" rather than "eth0" and IP address is not assigned). Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: On your Red Hat Enterprise Linux Workstation, log in as root and reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel6-cloud.qcow2 file in the location where you enter the command. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. C.8.1. steps Repeat the procedures for every image that you want to provision with Satellite. Move the image to the location where you want to store for future use. C.9. Steps Repeat the procedures for every image that you want to provision with Satellite. Move the image to the location where you want to store for future use.
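The note in Section C.8 states that partitions of instances based on the Red Hat Enterprise Linux 6 image must be resized manually to match the flavor's disk. A minimal sketch of doing this inside a running instance, assuming the root filesystem is the first partition on /dev/vda and uses ext4, might look like this:

# Grow the partition to fill the disk (cloud-utils-growpart was installed earlier)
growpart /dev/vda 1

# Grow the ext4 filesystem to fill the enlarged partition
resize2fs /dev/vda1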
[ "yum install virt-manager virt-viewer libvirt qemu-kvm", "yum install virt-install libguestfs-tools-c", "update-ca-trust enable update-ca-trust", "curl -sS https://satellite.example.com/register", "curl -sS --insecure https://satellite.example.com/register", "yum install katello-agent", "systemctl start goferd", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "{package-update}", "yum install cloud-utils-growpart cloud-init", "vi /etc/cloud/cloud.cfg", "- resolv-conf", "vi /etc/sysconfig/network", "NOZEROCONF=yes", "subscription-manager repos --disable=* subscription-manager unregister", "poweroff", "cd /var/lib/libvirt/images/", "virt-sysprep -d rhel7", "virt-sparsify --compress rhel7.qcow2 rhel7-cloud.qcow2", "{package-update}", "yum install cloud-utils-growpart cloud-init", "- resolv-conf", "echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules", "NOZEROCONF=yes", "subscription-manager repos --disable=* subscription-manager unregister yum clean all", "poweroff", "virt-sysprep -d rhel6", "virt-sparsify --compress rhel6.qcow2 rhel6-cloud.qcow2" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/building_cloud_images_provisioning
Chapter 177. Jing Component
Chapter 177. Jing Component Available as of Camel version 1.1 The Jing component uses the Jing Library to perform XML validation of the message body using either RelaxNG XML Syntax or RelaxNG Compact Syntax. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jing</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> Note that the MSV component can also support RelaxNG XML syntax. 177.1. URI format Camel 2.16 jing:someLocalOrRemoteResource From Camel 2.16 the component uses jing as its name, and you can use the option compactSyntax to turn on either RNG or RNC mode. 177.2. Options The Jing component has no options. The Jing endpoint is configured using URI syntax: with the following path and query parameters: 177.2.1. Path Parameters (1 parameters): Name Description Default Type resourceUri Required URL to a local resource on the classpath or a full URL to a remote resource or resource on the file system which contains the schema to validate against. String 177.2.2. Query Parameters (2 parameters): Name Description Default Type compactSyntax (producer) Whether to validate using RelaxNG compact syntax or not. By default this is false, which selects RelaxNG XML Syntax (rng); set it to true to use RelaxNG Compact Syntax (rnc). false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 177.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.jing.enabled Enable jing component true Boolean camel.component.jing.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 177.4. Example The following example shows how to configure a route from the endpoint direct:start which then goes to one of two endpoints, either mock:valid or mock:invalid, based on whether or not the XML matches the given RelaxNG Compact Syntax schema (which is supplied on the classpath); a minimal route sketch follows at the end of this chapter. 177.5. See Also Configuring Camel Component Endpoint Getting Started
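A minimal Java DSL sketch of the route described in Section 177.4; the schema location org/mycompany/schema.rnc is an illustrative placeholder, and invalid messages are routed to mock:invalid by handling the ValidationException raised by the validator:

import org.apache.camel.ValidationException;
import org.apache.camel.builder.RouteBuilder;

public class JingValidationRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Messages that fail validation raise a ValidationException and go to mock:invalid
        onException(ValidationException.class)
            .handled(true)
            .to("mock:invalid");

        from("direct:start")
            // Validate against a RelaxNG Compact Syntax schema on the classpath
            .to("jing:org/mycompany/schema.rnc?compactSyntax=true")
            .to("mock:valid");
    }
}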
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jing</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "jing:someLocalOrRemoteResource", "jing:resourceUri" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jing-component
probe::nfs.fop.fsync
probe::nfs.fop.fsync Name: probe::nfs.fop.fsync - NFS client fsync operation. Synopsis: nfs.fop.fsync. Values: ndirty (number of dirty pages), ino (inode number), dev (device identifier).
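A short SystemTap script using this probe point and printing the values documented above might look like the following sketch (the output format is illustrative):

# nfs_fsync.stp - trace NFS client fsync operations
probe nfs.fop.fsync {
    printf("fsync: dev=%d ino=%d dirty_pages=%d\n", dev, ino, ndirty)
}

Run it with: stap nfs_fsync.stp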
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-fsync
Chapter 1. Red Hat Process Automation Manager Spring Boot business applications
Chapter 1. Red Hat Process Automation Manager Spring Boot business applications Spring Framework is a Java platform that provides comprehensive infrastructure support for developing Java applications. Spring Boot is a lightweight framework based on Spring Boot starters. Spring Boot starters are pom.xml files that contain a set of dependency descriptors that you can include in your Spring Boot project. Red Hat Process Automation Manager Spring Boot business applications are flexible, UI-agnostic logical groupings of individual services that provide certain business capabilities. Business applications are based on Spring Boot starters. They are usually deployed separately and can be versioned individually. A complete business application enables a domain to achieve specific business goals, for example, order management or accommodation management. After you create and configure your business application, you can deploy it to an existing service or to the cloud, through OpenShift. Business applications can contain one or more of the following projects and more than one project of the same type: Business assets (KJAR): Contains business processes, rules, and forms, and can be easily imported into Business Central. Data model: Data model projects provide common data structures that are shared between the service projects and business assets projects. This enables proper encapsulation, promotes reuse, and reduces shortcuts. Each service project can expose its own public data model. Dynamic assets: Contains assets that you can use with case management. Service: A deployable project that provides the actual service with various capabilities. It includes the business logic that operates your business. In most cases, a service project includes business assets and data model projects. A business application can split services into smaller component service projects for better manageability.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/bus_app_business-applications
Chapter 2. Recovering a single server with replication
Chapter 2. Recovering a single server with replication If a single server is severely disrupted or lost, having multiple replicas ensures you can create a replacement replica and quickly restore the former level of redundancy. If your IdM topology contains an integrated Certificate Authority (CA), the steps for removing and replacing a damaged replica differ for the CA renewal server and other replicas. 2.1. Recovering from losing the CA renewal server If the Certificate Authority (CA) renewal server is lost, you must first promote another CA replica to fulfill the CA renewal server role, and then deploy a replacement CA replica. Prerequisites Your deployment uses IdM's internal Certificate Authority (CA). Another replica in the environment has CA services installed. Warning An IdM deployment is unrecoverable if: The CA renewal server has been lost. No other server has a CA installed. No backup of a replica with the CA role exists. It is critical to make backups from a replica with the CA role so certificate data is protected. For more information about creating and restoring from backups, see Preparing for data loss with IdM backups . Procedure From another replica in your environment, promote another CA replica to act as the new CA renewal server. See Changing and resetting IdM CA renewal server . From another replica in your environment, remove replication agreements to the lost CA renewal server. See Removing server from topology using the CLI . Install a new CA replica to replace the lost CA replica. See Installing an IdM replica with a CA . Update DNS to reflect changes in the replica topology. If IdM DNS is used, DNS service records are updated automatically. Verify IdM clients can reach IdM servers. See Adjusting IdM clients during recovery . Verification Test the Kerberos server on the new replica by successfully retrieving a Kerberos Ticket-Granting-Ticket as an IdM user. Test the Directory Server and SSSD configuration by retrieving user information. Test the CA configuration with the ipa cert-show command. Additional resources Using IdM CA renewal server 2.2. Recovering from losing a regular replica To replace a replica that is not the Certificate Authority (CA) renewal server, remove the lost replica from the topology and install a new replica in its place. Prerequisites The CA renewal server is operating properly. If the CA renewal server has been lost, see Recovering from losing the CA renewal server . Procedure Remove replication agreements to the lost server. See Uninstalling an IdM server . Deploy a new replica with the corresponding services (CA, KRA, DNS). See Installing an IdM replica . Update DNS to reflect changes in the replica topology. If IdM DNS is used, DNS service records are updated automatically. Verify IdM clients can reach IdM servers. See Adjusting IdM clients during recovery . Verification Test the Kerberos server on the new replica by successfully retrieving a Kerberos Ticket-Granting-Ticket as an IdM user. Test the Directory Server and SSSD configuration on the new replica by retrieving user information.
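The commands below are a minimal sketch of the promotion and cleanup steps, using the hypothetical host names replica2.example.com (a healthy CA replica) and lost-server.example.com (the lost CA renewal server); the guides referenced in the procedure remain the authoritative instructions:
# Promote an existing CA replica to act as the new CA renewal server
ipa config-mod --ca-renewal-master-server replica2.example.com
# Remove the lost server and its replication agreements from the topology
ipa server-del lost-server.example.com
Run both commands from a healthy replica after authenticating as an administrative user, for example with kinit admin.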
[ "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/performing_disaster_recovery_with_identity_management/recovering-a-single-server-with-replication_performing-disaster-recovery
Chapter 28. SystemProperty schema reference
Chapter 28. SystemProperty schema reference Used in: JvmOptions Property Property type Description name string The system property name. value string The system property value.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-systemproperty-reference
Chapter 8. Using RBAC to define and apply permissions
Chapter 8. Using RBAC to define and apply permissions 8.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 8.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. 
For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 8.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action is used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 8.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 8.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. 
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or, if allowed to create projects, they automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 8.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 8.4. Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. 
Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection 
get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] 
deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] 
pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 8.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 8.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admins RoleBinding . 8.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 8.8. 
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the cluster role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 8.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used. You can use the following commands for local RBAC management. Table 8.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 8.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 8.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 8.11. Creating a cluster admin The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user>
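To make the local versus cluster-wide distinction described in this chapter concrete, the following short sketch reuses the alice user and joe project from the earlier example; the commands themselves are the ones listed in Table 8.1 and Table 8.2:
# Grant the permissions of the cluster-admin cluster role inside the joe project only (local role binding)
oc adm policy add-role-to-user cluster-admin alice -n joe
# Grant cluster-admin across every project in the cluster (cluster role binding)
oc adm policy add-cluster-role-to-user cluster-admin alice
# Check who can perform a given action in the project
oc adm policy who-can delete pods -n joe
The first command gives alice super administrator privileges only within the joe project, while the second makes alice a true cluster administrator.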
[ "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete 
deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- 
---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/using-rbac
20.10. Configuring Time-Based Account Lockout Policies
20.10. Configuring Time-Based Account Lockout Policies Aside from locking accounts for failed authentication attempts, another method of defining an account lockout policy is to base it on account inactivity or account age. The Account Policy Plug-in uses a relative time setting to determine whether an account should be locked. Note Roles or classes of service can be used to inactivate accounts based on absolute account times. For example, a CoS can be created that inactivates every account created before a certain date. The Account Policy Plug-in requires three configuration entries: A configuration entry for the plug-in itself. This sets global values that are used for all account policies configured on that server. An account policy configuration entry. This entry is within the user directory and is essentially a template that is referenced and applied to user account entries. An entry that applies the account policy entry. A user account can reference an account policy directly, or a CoS or role can be used to apply account policies to sets of user accounts automatically. Note An account policy is applied through the acctPolicySubentry attribute. While this attribute can be added directly to user accounts (a sketch of this is shown at the end of this section), it is single-valued, which means that only one account policy can be applied to that account. That may be fine in most cases. However, an organization could realistically create two account policies, one for account inactivity and another for account expiration based on age. Using a CoS to apply account policies allows multiple account policies to be used for an account. 20.10.1. Account Policy Plug-in Syntax The Account Policy Plug-in itself has only two configuration attributes: nsslapd-pluginEnabled , which sets whether the plug-in is enabled or disabled. This attribute is off by default. nsslapd-pluginarg0 , which points to the DN of the plug-in configuration directory. The configuration entry is usually a child entry of the plug-in itself, such as cn=config,cn=Account Policy Plugin,cn=plugins,cn=config . Beyond that, account policies are defined in two parts: The plug-in configuration entry identified in the nsslapd-pluginarg0 attribute. This sets global configuration for the plug-in to use to identify account policy configuration entries and to manage user account entries. These settings apply across the server. The configuration entry attributes are described in the Account Policy Plug-in Attributes section in the Red Hat Directory Server Configuration, Command, and File Reference . The account policy configuration entry. This is much like a template entry, which sets specific values for the account policies. User accounts, either directly or through CoS entries, reference this account policy entry. The account policy and user entry attributes are described in the following table: Table 20.2. Account Policy Entry and User Entry Attributes Attribute Definition Configuration or User Entry accountpolicy (object class) Defines a template entry for account inactivation or expiration policies. Configuration accountInactivityLimit (attribute) Sets the time period, in seconds, from the last login time of an account before that account is locked for inactivity. Configuration acctPolicySubentry (attribute) Identifies any entry that belongs to an account policy (specifically, an account lockout policy). The value of this attribute points to the DN of the account policy that is applied to the entry. 
User createTimestamp (operational attribute) Contains the date and time that the entry was initially created. User lastLoginTime (operational attribute) Contains a timestamp of the last time that the given account authenticated to the directory. User For further details, see the attribute's description in the Red Hat Directory Server Configuration, Command, and File Reference . 20.10.2. Account Inactivity and Account Expiration The Account Policy plug-in enables you to set up: account expiration: Accounts are disabled a certain amount of time after the account was created. account inactivity: Accounts are disabled a certain amount of time after the last successful login. This enables you to automatically disable unused accounts. Disabled accounts are no longer able to log in. To set up the Account Policy plug-in: Enable the Account Policy Plug-in: Set the plug-in configuration entry: Create the plug-in configuration entry: To use CoS or roles with account policies, set the alwaysRecordLogin value to yes . This means every entry has a login time recorded, even if it does not have the acctPolicySubentry attribute. Set the primary attribute to use for the account policy evaluation as the value for stateAttrName . For account inactivity, use the lastLoginTime attribute. For a simple account expiration time, use the createTimestamp attribute. You can set a secondary attribute in altStateAttrName , which is checked if the primary attribute defined in stateAttrName does not exist. If no alternative attribute is specified, the default value createTimestamp is used. Warning If the value for the primary attribute is set to lastLoginTime and altStateAttrName to createTimestamp , users in existing environments are automatically locked out when their accounts do not have the lastLoginTime attribute and the createTimestamp is older than the configured inactivity period. To avert this situation, set the alternative attribute to 1.1 . This explicitly states that no alternative attribute is used. The lastLoginTime attribute is created automatically the next time the user logs in. Set the attribute to use to show which entries have an account policy applied to them ( acctPolicySubentry ). Set the attribute in the account policy that is used to set the actual timeout period, in seconds ( accountInactivityLimit ). Restart the server to load the new plug-in configuration: Define an account policy: Create the class of service template entry: Account policies can be defined directly on user entries, instead of using a CoS. However, using a CoS allows an account policy to be applied and updated reliably for multiple entries, and it allows multiple policies to be applied to an entry. Create the class of service definition entry. The managed entry for the CoS is the account policy attribute, acctPolicySubentry . This example applies the CoS to the entire directory tree: 20.10.3. Disabling Accounts a Certain Amount of Time After Password Expiry Directory Server enables you to configure an account policy that disables an account a certain amount of time after the password expired. Disabled accounts are no longer able to log in. To set up this configuration, follow the procedure in Section 20.10.2, "Account Inactivity and Account Expiration" . However, when configuring the plug-in configuration entry, use the following settings instead: This configuration uses a dummy value in the stateAttrName parameter. 
Therefore, only the passwordExpirationTime attribute set in the altStateAttrName parameter is used to calculate when an account is expired. To additionally record the time of the last successful login in the lastLoginTime attribute of the user entry, set: Using this configuration, an account is automatically disabled if the sum of the time set in the user's passwordExpirationTime attribute and in the accountInactivityLimit parameter's value is in the past. Using this configuration, an account is automatically disabled if the sum of the value in the user's passwordExpirationTime attribute and in the accountInactivityLimit parameter exceeds the time since the alwaysRecordLoginAttr attribute was last updated. 20.10.4. Tracking Login Times without Setting Lockout Policies It is also possible to use the Account Policy Plug-in to track user login times without setting an expiration time or inactivity period. In this case, the Account Policy Plug-in is used to add the lastLoginTime attribute to user entries, but no other policy rules need to be set. In that case, set up the Account Policy Plug-in as normal, to track login times. However, do not create a CoS to act on the login information that is being tracked. Enable the Account Policy Plug-in: Create the plug-in configuration entry to record login times: Set the alwaysRecordLogin value to yes so that every entry has a login time recorded. Set the lastLoginTime attribute as the attribute to use for the account policy ( stateattrname ). Set the attribute to use to show which entries have an account policy applied to them ( acctPolicySubentry ). Set the attribute in the account policy which is used to set the actual timeout period, in seconds ( accountInactivityLimit ). Restart the server to load the new plug-in configuration: 20.10.5. Unlocking Inactive Accounts If an account is locked because it reached the inactivity limit, you can reactivate it using one of the following methods: Using the dsidm utility: Manually by resetting the lastLoginTime attribute to a current time stamp: The lastLoginTime attribute stores its value in GMT/UTC time (Zulu time zone), indicated by the appended Z to the time stamp.
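As the note at the beginning of this section mentions, an account policy can also be applied directly to a single user entry instead of through a CoS. The following is a minimal sketch that references the cn=Account Inactivation Policy,dc=example,dc=com entry defined in this section; the user DN is only an example, and depending on your schema the entry might also need an object class (such as extensibleObject) that allows the acctPolicySubentry attribute:
# ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: uid=example,ou=people,dc=example,dc=com
changetype: modify
add: acctPolicySubentry
acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com
Because acctPolicySubentry is single-valued, this approach applies at most one policy to the entry; use the CoS method when an account needs several policies.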
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin account-policy enable", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin account-policy set --config-entry=\" cn=config,cn=Account Policy Plugin,cn=plugins,cn=config \"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin account-policy config-entry set \"cn=config,cn=Account Policy Plugin,cn=plugins,cn=config\" --always-record-login yes --state-attr lastLoginTime --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit", "dsctl instance_name restart", "ldapadd -a -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=Account Inactivation Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 2592000 cn: Account Inactivation Policy", "ldapadd -a -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=TempltCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com", "ldapadd -a -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=DefnCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TempltCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default", "dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config objectClass: top objectClass: extensibleObject cn: config alwaysrecordlogin: yes stateAttrName: non_existent_attribute altStateAttrName: passwordExpirationTime specattrname: acctPolicySubentry limitattrname: accountInactivityLimit", "dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config alwaysRecordLoginAttr: lastLoginTime", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin account-policy enable", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin account-policy config-entry set \"cn=config,cn=Account Policy Plugin,cn=plugins,cn=config\" --always-record-login yes --state-attr lastLoginTime --alt-state-attr createTimestamp --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit", "dsctl instance_name restart", "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" account unlock \" uid=example \"", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: uid= example ,ou=people,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210901000000Z" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/account-policy-plugin
Chapter 19. API reference
Chapter 19. API reference 19.1. 5.6 Logging API reference 19.1.1. Logging 5.6 API reference 19.1.1.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 19.1.1.1.1. .spec 19.1.1.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 19.1.1.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 19.1.1.1.2. .spec.inputs[] 19.1.1.1.2.1. Description InputSpec defines a selector of log messages. 19.1.1.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 19.1.1.1.3. .spec.inputs[].application 19.1.1.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 19.1.1.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 19.1.1.1.4. .spec.inputs[].application.namespaces[] 19.1.1.1.4.1. Description 19.1.1.1.4.1.1. Type array 19.1.1.1.5. .spec.inputs[].application.selector 19.1.1.1.5.1. Description A label selector is a label query over a set of resources. 19.1.1.1.5.1.1. Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 19.1.1.1.6. .spec.inputs[].application.selector.matchLabels 19.1.1.1.6.1. Description 19.1.1.1.6.1.1. Type object 19.1.1.1.7. .spec.outputDefaults 19.1.1.1.7.1. Description 19.1.1.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 19.1.1.1.8. .spec.outputDefaults.elasticsearch 19.1.1.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 19.1.1.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 19.1.1.1.9. .spec.outputs[] 19.1.1.1.9.1. Description Output defines a destination for log messages. 19.1.1.1.9.1.1. 
Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 19.1.1.1.10. .spec.outputs[].secret 19.1.1.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 19.1.1.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 19.1.1.1.11. .spec.outputs[].tls 19.1.1.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 19.1.1.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 19.1.1.1.12. .spec.pipelines[] 19.1.1.1.12.1. Description PipelinesSpec link a set of inputs to a set of outputs. 19.1.1.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 19.1.1.1.13. .spec.pipelines[].inputRefs[] 19.1.1.1.13.1. Description 19.1.1.1.13.1.1. Type array 19.1.1.1.14. .spec.pipelines[].labels 19.1.1.1.14.1. Description 19.1.1.1.14.1.1. Type object 19.1.1.1.15. .spec.pipelines[].outputRefs[] 19.1.1.1.15.1. Description 19.1.1.1.15.1.1. Type array 19.1.1.1.16. .status 19.1.1.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 19.1.1.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 19.1.1.1.17. .status.conditions 19.1.1.1.17.1. Description 19.1.1.1.17.1.1. Type object 19.1.1.1.18. .status.inputs 19.1.1.1.18.1. Description 19.1.1.1.18.1.1. Type Conditions 19.1.1.1.19. .status.outputs 19.1.1.1.19.1. Description 19.1.1.1.19.1.1. Type Conditions 19.1.1.1.20. .status.pipelines 19.1.1.1.20.1. Description 19.1.1.1.20.1.1. Type Conditions== ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 19.1.1.1.21. .spec 19.1.1.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 19.1.1.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated. 
Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 19.1.1.1.22. .spec.collection 19.1.1.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 19.1.1.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 19.1.1.1.23. .spec.collection.fluentd 19.1.1.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 19.1.1.1.23.1.1. Type object Property Type Description buffer object inFile object 19.1.1.1.24. .spec.collection.fluentd.buffer 19.1.1.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 19.1.1.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 19.1.1.1.25. .spec.collection.fluentd.inFile 19.1.1.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 19.1.1.1.25.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 19.1.1.1.26. .spec.collection.logs 19.1.1.1.26.1. Description 19.1.1.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 19.1.1.1.27. .spec.collection.logs.fluentd 19.1.1.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 19.1.1.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 19.1.1.1.28. .spec.collection.logs.fluentd.nodeSelector 19.1.1.1.28.1. Description 19.1.1.1.28.1.1. Type object 19.1.1.1.29. .spec.collection.logs.fluentd.resources 19.1.1.1.29.1. Description 19.1.1.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.30. .spec.collection.logs.fluentd.resources.limits 19.1.1.1.30.1. Description 19.1.1.1.30.1.1. Type object 19.1.1.1.31. .spec.collection.logs.fluentd.resources.requests 19.1.1.1.31.1. Description 19.1.1.1.31.1.1. Type object 19.1.1.1.32. .spec.collection.logs.fluentd.tolerations[] 19.1.1.1.32.1. Description 19.1.1.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 19.1.1.1.33.1. Description 19.1.1.1.33.1.1. Type int 19.1.1.1.34. .spec.curation 19.1.1.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 19.1.1.1.34.1.1. Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 19.1.1.1.35. .spec.curation.curator 19.1.1.1.35.1. Description 19.1.1.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 19.1.1.1.36. .spec.curation.curator.nodeSelector 19.1.1.1.36.1. Description 19.1.1.1.36.1.1. Type object 19.1.1.1.37. .spec.curation.curator.resources 19.1.1.1.37.1. Description 19.1.1.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.38. .spec.curation.curator.resources.limits 19.1.1.1.38.1. Description 19.1.1.1.38.1.1. Type object 19.1.1.1.39. .spec.curation.curator.resources.requests 19.1.1.1.39.1. Description 19.1.1.1.39.1.1. Type object 19.1.1.1.40. 
.spec.curation.curator.tolerations[] 19.1.1.1.40.1. Description 19.1.1.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 19.1.1.1.41.1. Description 19.1.1.1.41.1.1. Type int 19.1.1.1.42. .spec.forwarder 19.1.1.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use; it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 19.1.1.1.42.1.1. Type object Property Type Description fluentd object 19.1.1.1.43. .spec.forwarder.fluentd 19.1.1.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 19.1.1.1.43.1.1. Type object Property Type Description buffer object inFile object 19.1.1.1.44. .spec.forwarder.fluentd.buffer 19.1.1.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 19.1.1.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 19.1.1.1.45. .spec.forwarder.fluentd.inFile 19.1.1.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 19.1.1.1.45.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 19.1.1.1.46. .spec.logStore 19.1.1.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 19.1.1.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 19.1.1.1.47. .spec.logStore.elasticsearch 19.1.1.1.47.1. Description 19.1.1.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 19.1.1.1.48. .spec.logStore.elasticsearch.nodeSelector 19.1.1.1.48.1. Description 19.1.1.1.48.1.1. Type object 19.1.1.1.49. .spec.logStore.elasticsearch.proxy 19.1.1.1.49.1. Description 19.1.1.1.49.1.1. Type object Property Type Description resources object 19.1.1.1.50. .spec.logStore.elasticsearch.proxy.resources 19.1.1.1.50.1. Description 19.1.1.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 19.1.1.1.51.1. Description 19.1.1.1.51.1.1. Type object 19.1.1.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 19.1.1.1.52.1. Description 19.1.1.1.52.1.1. Type object 19.1.1.1.53. .spec.logStore.elasticsearch.resources 19.1.1.1.53.1. Description 19.1.1.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.54. .spec.logStore.elasticsearch.resources.limits 19.1.1.1.54.1. Description 19.1.1.1.54.1.1. Type object 19.1.1.1.55. .spec.logStore.elasticsearch.resources.requests 19.1.1.1.55.1. Description 19.1.1.1.55.1.1. Type object 19.1.1.1.56. .spec.logStore.elasticsearch.storage 19.1.1.1.56.1. Description 19.1.1.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 19.1.1.1.57. .spec.logStore.elasticsearch.storage.size 19.1.1.1.57.1. Description 19.1.1.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 19.1.1.1.58. .spec.logStore.elasticsearch.storage.size.d 19.1.1.1.58.1. Description 19.1.1.1.58.1.1. Type object Property Type Description Dec object 19.1.1.1.59. 
.spec.logStore.elasticsearch.storage.size.d.Dec 19.1.1.1.59.1. Description 19.1.1.1.59.1.1. Type object Property Type Description scale int unscaled object 19.1.1.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 19.1.1.1.60.1. Description 19.1.1.1.60.1.1. Type object Property Type Description abs Word sign neg bool 19.1.1.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 19.1.1.1.61.1. Description 19.1.1.1.61.1.1. Type Word 19.1.1.1.62. .spec.logStore.elasticsearch.storage.size.i 19.1.1.1.62.1. Description 19.1.1.1.62.1.1. Type int Property Type Description scale int value int 19.1.1.1.63. .spec.logStore.elasticsearch.tolerations[] 19.1.1.1.63.1. Description 19.1.1.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 19.1.1.1.64.1. Description 19.1.1.1.64.1.1. Type int 19.1.1.1.65. .spec.logStore.lokistack 19.1.1.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 19.1.1.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 19.1.1.1.66. .spec.logStore.retentionPolicy 19.1.1.1.66.1. Description 19.1.1.1.66.1.1. Type object Property Type Description application object audit object infra object 19.1.1.1.67. .spec.logStore.retentionPolicy.application 19.1.1.1.67.1. Description 19.1.1.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 19.1.1.1.68.1. Description 19.1.1.1.68.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.69. .spec.logStore.retentionPolicy.audit 19.1.1.1.69.1. Description 19.1.1.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 19.1.1.1.70.1. Description 19.1.1.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.71. 
.spec.logStore.retentionPolicy.infra 19.1.1.1.71.1. Description 19.1.1.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 19.1.1.1.72.1. Description 19.1.1.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.73. .spec.visualization 19.1.1.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 19.1.1.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 19.1.1.1.74. .spec.visualization.kibana 19.1.1.1.74.1. Description 19.1.1.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 19.1.1.1.75. .spec.visualization.kibana.nodeSelector 19.1.1.1.75.1. Description 19.1.1.1.75.1.1. Type object 19.1.1.1.76. .spec.visualization.kibana.proxy 19.1.1.1.76.1. Description 19.1.1.1.76.1.1. Type object Property Type Description resources object 19.1.1.1.77. .spec.visualization.kibana.proxy.resources 19.1.1.1.77.1. Description 19.1.1.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.78. .spec.visualization.kibana.proxy.resources.limits 19.1.1.1.78.1. Description 19.1.1.1.78.1.1. Type object 19.1.1.1.79. .spec.visualization.kibana.proxy.resources.requests 19.1.1.1.79.1. Description 19.1.1.1.79.1.1. Type object 19.1.1.1.80. .spec.visualization.kibana.replicas 19.1.1.1.80.1. Description 19.1.1.1.80.1.1. Type int 19.1.1.1.81. .spec.visualization.kibana.resources 19.1.1.1.81.1. Description 19.1.1.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.82. .spec.visualization.kibana.resources.limits 19.1.1.1.82.1. Description 19.1.1.1.82.1.1. Type object 19.1.1.1.83. .spec.visualization.kibana.resources.requests 19.1.1.1.83.1. Description 19.1.1.1.83.1.1. Type object 19.1.1.1.84. .spec.visualization.kibana.tolerations[] 19.1.1.1.84.1. Description 19.1.1.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. 
tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 19.1.1.1.85.1. Description 19.1.1.1.85.1.1. Type int 19.1.1.1.86. .status 19.1.1.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 19.1.1.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 19.1.1.1.87. .status.collection 19.1.1.1.87.1. Description 19.1.1.1.87.1.1. Type object Property Type Description logs object (optional) 19.1.1.1.88. .status.collection.logs 19.1.1.1.88.1. Description 19.1.1.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 19.1.1.1.89. .status.collection.logs.fluentdStatus 19.1.1.1.89.1. Description 19.1.1.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 19.1.1.1.90. .status.collection.logs.fluentdStatus.clusterCondition 19.1.1.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 19.1.1.1.90.1.1. Type object 19.1.1.1.91. .status.collection.logs.fluentdStatus.nodes 19.1.1.1.91.1. Description 19.1.1.1.91.1.1. Type object 19.1.1.1.92. .status.conditions 19.1.1.1.92.1. Description 19.1.1.1.92.1.1. Type object 19.1.1.1.93. .status.curation 19.1.1.1.93.1. Description 19.1.1.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 19.1.1.1.94. .status.curation.curatorStatus[] 19.1.1.1.94.1. Description 19.1.1.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 19.1.1.1.95. .status.curation.curatorStatus[].clusterCondition 19.1.1.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 19.1.1.1.95.1.1. Type object 19.1.1.1.96. .status.logStore 19.1.1.1.96.1. Description 19.1.1.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 19.1.1.1.97. .status.logStore.elasticsearchStatus[] 19.1.1.1.97.1. Description 19.1.1.1.97.1.1. Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 19.1.1.1.98. .status.logStore.elasticsearchStatus[].cluster 19.1.1.1.98.1. Description 19.1.1.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 19.1.1.1.99. 
.status.logStore.elasticsearchStatus[].clusterConditions 19.1.1.1.99.1. Description 19.1.1.1.99.1.1. Type object 19.1.1.1.100. .status.logStore.elasticsearchStatus[].deployments[] 19.1.1.1.100.1. Description 19.1.1.1.100.1.1. Type array 19.1.1.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 19.1.1.1.101.1. Description 19.1.1.1.101.1.1. Type object 19.1.1.1.102. .status.logStore.elasticsearchStatus[].pods 19.1.1.1.102.1. Description 19.1.1.1.102.1.1. Type object 19.1.1.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 19.1.1.1.103.1. Description 19.1.1.1.103.1.1. Type array 19.1.1.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 19.1.1.1.104.1. Description 19.1.1.1.104.1.1. Type array 19.1.1.1.105. .status.visualization 19.1.1.1.105.1. Description 19.1.1.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 19.1.1.1.106. .status.visualization.kibanaStatus[] 19.1.1.1.106.1. Description 19.1.1.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 19.1.1.1.107. .status.visualization.kibanaStatus[].clusterCondition 19.1.1.1.107.1. Description 19.1.1.1.107.1.1. Type object 19.1.1.1.108. .status.visualization.kibanaStatus[].replicaSets[] 19.1.1.1.108.1. Description 19.1.1.1.108.1.1. Type array
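The field reference above is easier to follow next to a concrete resource. The following is a minimal, hedged sketch of a ClusterLogForwarder that forwards application logs to an external Elasticsearch instance in addition to the default store; the output URL, secret name, and pipeline label are hypothetical, and you should confirm the apiVersion against the logging version installed in your cluster:
$ oc apply -f - <<EOF
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-elasticsearch
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: remote-elasticsearch-credentials
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - remote-elasticsearch
    - default
    labels:
      environment: dev
EOF
The pipeline here maps one built-in input (application) to two outputs: the named Elasticsearch output and the built-in default store described at the start of this chapter.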
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/api-reference
Chapter 1. Importing data to Directory Server
Chapter 1. Importing data to Directory Server Import data from an LDIF file to a Directory Server database using the command line or the web console. Important To import data, you must store the LDIF file that you want to import in the /var/lib/dirsrv/slapd- instance_name /ldif/ directory. 1.1. Importing data using the command line while the server is running To import data while the Directory Server instance is running, use the dsconf backend import command. Warning When you start an import operation, Directory Server first removes all existing data from the database and, subsequently, imports the data from the LDIF file. Therefore, if the import fails, the server returns no entries or a partial set of entries. Prerequisites The LDIF file permissions allow the dirsrv user to read the file. The LDIF file to import contains the root suffix entry. The suffix and its database, to which you want to import data, exists in the directory. The Directory Server instance is running. The LDIF file to import uses UTF-8 character set encoding. Procedure Optional: By default, Directory Server sets the entry update sequence numbers (USNs) of all imported entries to 0 . To set an alternative initial USN value, set the nsslapd-entryusn-import-initval parameter. For example, to set USN for all imported values to 12345 , enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace nsslapd-entryusn-import-initval= 12345 If you copied the file you want to import to /var/lib/dirsrv/slapd- instance_name /ldif/ , reset the SELinux context on that file: # restorecon -Rv /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif Use the dsconf backend import command to import data from an LDIF file. For example, to import the /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif file into the userRoot database: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend import userRoot /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif The import task has finished successfully Search the /var/log/dirsrv/slapd- instance_name /errors log for problems during the import. Verification Search for entries under the imported suffix, for example dc=example,dc=com : # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b " dc=example,dc=com " -s sub -x Additional resources Storing suffixes in separate databases nsslapd-entryusn-import-initval 1.2. Importing data using the command line while the server is offline If the Directory Server instance is offline, use the dsctl ldif2db command to import data. Warning When you start an import operation, Directory Server first removes all existing data from the database and, subsequently, imports the data from the LDIF file. Therefore, if the import fails, the server returns no entries or a partial set of entries. Prerequisites The LDIF file permissions allow the dirsrv user to read the file. The LDIF file to import contains the root suffix entry. The suffix and its database, to which you want to import data, exists in the directory. The Directory Server instance is not running. The LDIF file to import uses UTF-8 character set encoding. Procedure Optional: By default, Directory Server sets the entry update sequence numbers (USNs) of all imported entries to 0 . To set an alternative initial USN value, set the nsslapd-entryusn-import-initval parameter. 
For example, to set USN for all imported values to 12345 , enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace nsslapd-entryusn-import-initval= 12345 If you copied the file you want to import to /var/lib/dirsrv/slapd- instance_name /ldif/ , reset the SELinux context on that file: # restorecon -Rv /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif Use the dsctl ldif2db command to import data from an LDIF file. For example, to import the /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif file into the userRoot database: # dsctl instance_name ldif2db userRoot /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif OK group dirsrv exists OK user dirsrv exists [17/Jul/2021:13:42:42.015554231 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 ... [17/Jul/2021:13:42:44.302630629 +0200] - INFO - import_main_offline - import userRoot: Import complete. Processed 160 entries in 2 seconds. (80.00 entries/sec) ldif2db successful Search the /var/log/dirsrv/slapd- instance_name /errors log for problems during the import. Optional: Start the instance: # dsctl instance_name start Verification Search for entries under the imported suffix, for example dc=example,dc=com : # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b " dc=example,dc=com " -s sub -x Additional resources Storing suffixes in separate databases nsslapd-entryusn-import-initval To display all additional settings that you can use to import data, see the output of the dsctl ldif2db --help command. 1.3. Importing data using the web console while the server is running Directory Server supports importing data using the web console. Warning When you start an import operation, Directory Server first removes all existing data from the database and, subsequently, imports the data from the LDIF file. Therefore, if the import fails, the server returns no entries or a partial set of entries. Prerequisites The LDIF file permissions allow the dirsrv user to read the file. The LDIF file to import contains the root suffix entry. The suffix and its database, to which you want to import data, exists in the directory. The LDIF file is stored in the /var/lib/dirsrv/slapd- instance_name /ldif/ directory and has the dirsrv_var_lib_t SELinux context set. The Directory Server instance is running. You are logged in to the instance in the web console. The LDIF file to import uses UTF-8 character set encoding. Procedure In the web console, open the Database menu. Select the suffix entry. Click Suffix Tasks , and select Initialize Suffix . Click the Import button to the LDIF file you want to import. If the LDIF file is stored in a directory different than /var/lib/dirsrv/slapd- instance_name /ldif/ , enter the full path to the file and click the Import button. Select Yes, I am sure , and click Initialize Database to confirm. To check the log for problems during the import, open the Monitoring Logging Errors Log menu. Verification Search for entries under the imported suffix, for example dc=example,dc=com : # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b " dc=example,dc=com " -s sub -x Additional resources Storing suffixes in separate databases
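Both command-line procedures above assume that the LDIF file is readable by the dirsrv user and uses UTF-8 encoding. A quick way to check these prerequisites before starting the import, using the example file path from this chapter:
# ls -l /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif
# file -bi /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif
If the file is not readable by the dirsrv user, adjust the owner or permissions, for example with chown dirsrv:dirsrv or chmod 644, before running the import.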
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace nsslapd-entryusn-import-initval= 12345", "restorecon -Rv /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend import userRoot /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif The import task has finished successfully", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \" dc=example,dc=com \" -s sub -x", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace nsslapd-entryusn-import-initval= 12345", "restorecon -Rv /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif", "dsctl instance_name ldif2db userRoot /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif OK group dirsrv exists OK user dirsrv exists [17/Jul/2021:13:42:42.015554231 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [17/Jul/2021:13:42:44.302630629 +0200] - INFO - import_main_offline - import userRoot: Import complete. Processed 160 entries in 2 seconds. (80.00 entries/sec) ldif2db successful", "dsctl instance_name start", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \" dc=example,dc=com \" -s sub -x", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \" dc=example,dc=com \" -s sub -x" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/importing_and_exporting_data/importing-data-to-directory-server_importing-and-exporting-data
Chapter 5. Troubleshooting
Chapter 5. Troubleshooting Troubleshoot and solve package-related issues after the in-place upgrade from RHEL 6.10 to RHEL 7.9. 5.1. Troubleshooting resources You can refer to the following troubleshooting resources. Console Output By default, only error and critical log level messages are printed to the console output by the Pre-upgrade Assistant. To also print debug, info, and warning messages, use the --debug option with the redhat-upgrade-tool command. Logs The /var/log/upgrade.log file lists issues found during the upgrade phase. Reports The /root/preupgrade/result.html file lists issues found during the pre-upgrade phase. This report is also available in the web console. For more information, see Assessing upgrade suitability from a web UI . 5.2. Fixing dependency errors During an in-place upgrade, certain packages might be installed without some of their dependencies. Procedure Identify dependency errors: If the command displays no output, no further actions are required. To fix dependency errors, reinstall the affected packages. During this operation, the yum utility automatically installs missing dependencies. If the required dependencies are not provided by repositories available on the system, install those packages manually. 5.3. Installing missing packages Certain packages might be missing after the upgrade from RHEL 6 to RHEL 7. This problem can occur for several reasons: You did not provide a repository to the Red Hat Upgrade Tool that contained these packages. Install missing packages manually. Certain problems are preventing some RPMs from being installed. Resolve these problems before installing missing packages. You are missing NetworkManager because the service was not configured and running before the upgrade. Install and configure NetworkManager manually. For more information, see Getting started with NetworkManager . Procedure Review which packages are missing from your RHEL 7 system using one of the following methods: Review the pre-upgrade report. Run the following command to generate a list of expected packages in RHEL 7 and compare it with the packages that are currently installed to determine which packages are missing (one way to perform the comparison is sketched at the end of this chapter). Install missing packages using one of the following methods: Locate and install all missing packages at once. This is the quickest method of getting all missing packages. If you know that you want to install only some of the missing packages, install each package individually. Note For further details about other files with lists of packages you should install on the upgraded system, see the /root/preupgrade/kickstart/README file and the pre-upgrade report. 5.4. Known issues The following are issues known to occur when upgrading from RHEL 6 to RHEL 7: In-place upgrade from a RHEL 6 system to RHEL 7 is impossible with FIPS mode enabled In-place upgrade on IBM Z fails and causes a data loss if the LDL format is used The Preupgrade Assistant reports notchecked if certain packages are missing on the system redhat-upgrade-tool fails to reconfigure the network interfaces, preventing the upgrade to happen redhat-upgrade-tool fails to reconfigure the static routes on the network interfaces, preventing the upgrade to happen Why does Red Hat Enterprise Linux 6 to 7 in-place upgrade fail if /usr is on separate partition? Systems on the IBM Power, big endian architecture that use multipath volumes might experience issues during the in-place upgrade, causing the upgraded system to fail to boot. 
To prevent this issue, do not perform the in-place upgrade on such systems. (BZ # 1704283 ) If the target RHEL 7 repository contains the kernel-3.10.0-1160.62.1.el7 package or newer, the upgrade fails. This results in a broken system in an unbootable state. To prevent this issue, use the RHEL 7.9 GA repository without z-stream updates or ensure that the RHEL 7.9 kernel inside the repository is older than the kernel-3.10.0-1160.62.1.el7 package. ( RHEL-3292 ) 5.5. Rolling back the upgrade If the in-place upgrade to RHEL 7 is unsuccessful, it is possible to get the RHEL 6 working system back in limited configurations using one of the following methods: The rollback capability integrated in the Red Hat Upgrade Tool. For more information, see Rollbacks and cleanup after upgrading RHEL 6 to RHEL 7 . A custom backup and recovery solution, for example, the Relax-and-Recover (ReaR) utility. For more information, see the ReaR documentation and What is Relax and Recover (ReaR) and how can I use it for disaster recovery? .
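The comparison step in Section 5.3 can be done by diffing the expected package list against the packages that are currently installed. This is only a sketch, and the temporary file names are arbitrary:
# cat /root/preupgrade/kickstart/RHRHEL7rpmlist* | grep -v "#" | cut -d "|" -f 3 | sort -u > /tmp/expected-packages.txt
# rpm -qa --qf '%{NAME}\n' | sort -u > /tmp/installed-packages.txt
# comm -23 /tmp/expected-packages.txt /tmp/installed-packages.txt
The comm -23 invocation prints only the package names that appear in the expected list but not in the installed list, which is the set of missing packages to review.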
[ "yum check dependencies", "cat /root/preupgrade/kickstart/RHRHEL7rpmlist* | grep -v \"#\" | cut -d \"|\" -f 3 | sort | uniq", "cd /root/preupgrade; bash noauto_postupgrade.d/install_rpmlist.sh kickstart/RHRHEL7rpmlist_kept", "yum install <package>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/upgrading_from_rhel_6_to_rhel_7/troubleshooting-rhel-6-to-rhel-7_upgrading-from-rhel-6-to-rhel-7
Authentication
Authentication Red Hat Developer Hub 1.3 Configuring authentication to external services in Red Hat Developer Hub Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authentication/index
Chapter 7. Configuring SSSD
Chapter 7. Configuring SSSD 7.1. Introduction to SSSD 7.1.1. How SSSD Works The System Security Services Daemon (SSSD) is a system service to access remote directories and authentication mechanisms. It connects a local system (an SSSD client ) to an external back-end system (a provider ). This provides the SSSD client with access to identity and authentication remote services using an SSSD provider. For example, these remote services include: an LDAP directory, an Identity Management (IdM) or Active Directory (AD) domain, or a Kerberos realm. For this purpose, SSSD: Connects the client to an identity store to retrieve authentication information. Uses the obtained authentication information to create a local cache of users and credentials on the client. Users on the local system are then able to authenticate using the user accounts stored in the external back-end system. SSSD does not create user accounts on the local system. Instead, it uses the identities from the external data store and lets the users access the local system. Figure 7.1. How SSSD works SSSD can also provide caches for several system services, such as Name Service Switch (NSS) or Pluggable Authentication Modules (PAM). 7.1.2. Benefits of Using SSSD Reduced load on identity and authentication servers When requesting information, SSSD clients contact SSSD, which checks its cache. SSSD contacts the servers only if the information is not available in the cache. Offline authentication SSSD optionally keeps a cache of user identities and credentials retrieved from remote services. In this setup, users can successfully authenticate to resources even if the remote server or the SSSD client are offline. A single user account: improved consistency of the authentication process With SSSD, it is not necessary to maintain both a central account and a local user account for offline authentication. Remote users often have multiple user accounts. For example, to connect to a virtual private network (VPN), remote users have one account for the local system and another account for the VPN system. Thanks to caching and offline authentication, remote users can connect to network resources simply by authenticating to their local machine. SSSD then maintains their network credentials.
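As a rough illustration of how an SSSD client is wired to a provider, the following minimal /etc/sssd/sssd.conf sketch configures a single LDAP domain with credential caching for offline authentication; the domain name, server URI, and search base are placeholder values for an assumed environment, not defaults.
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
# Use an LDAP directory as both the identity and the authentication provider
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
# Cache credentials locally so users can still authenticate while offline
cache_credentials = true
After editing the file, restrict its permissions and restart the sssd service for the changes to take effect.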
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/sssd
Chapter 10. Predictive Model Markup Language (PMML)
Chapter 10. Predictive Model Markup Language (PMML) Predictive Model Markup Language (PMML) is an XML-based standard established by the Data Mining Group (DMG) for defining statistical and data-mining models. PMML models can be shared between PMML-compliant platforms and across organizations so that business analysts and developers are unified in designing, analyzing, and implementing PMML-based assets and services. For more information about the background and applications of PMML, see the DMG PMML specification . 10.1. PMML conformance levels The PMML specification defines producer and consumer conformance levels in a software implementation to ensure that PMML models are created and integrated reliably. For the formal definitions of each conformance level, see the DMG PMML conformance page. The following list summarizes the PMML conformance levels: Producer conformance A tool or application is producer conforming if it generates valid PMML documents for at least one type of model. Satisfying PMML producer conformance requirements ensures that a model definition document is syntactically correct and defines a model instance that is consistent with semantic criteria that are defined in model specifications. Consumer conformance An application is consumer conforming if it accepts valid PMML documents for at least one type of model. Satisfying consumer conformance requirements ensures that a PMML model created according to producer conformance can be integrated and used as defined. For example, if an application is consumer conforming for Regression model types, then valid PMML documents defining models of this type produced by different conforming producers would be interchangeable in the application. Red Hat Process Automation Manager includes consumer conformance support for the following PMML model types: Regression models Scorecard models Tree models Mining models (with sub-types modelChain , selectAll , and selectFirst ) Clustering models For a list of all PMML model types, including those not supported in Red Hat Process Automation Manager, see the DMG PMML specification .
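For orientation only, the following trimmed skeleton shows the general shape of a PMML document containing a simple regression model; the field names, coefficients, and model name are invented for illustration, and a real asset would follow the full DMG schema for the chosen PMML version.
<PMML xmlns="http://www.dmg.org/PMML-4_2" version="4.2">
  <Header description="Illustrative regression model"/>
  <DataDictionary numberOfFields="2">
    <DataField name="age" optype="continuous" dataType="double"/>
    <DataField name="risk" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel modelName="ExampleRegression" functionName="regression">
    <MiningSchema>
      <MiningField name="age"/>
      <MiningField name="risk" usageType="target"/>
    </MiningSchema>
    <RegressionTable intercept="1.5">
      <NumericPredictor name="age" coefficient="0.02"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
A consumer-conforming engine such as the one in Red Hat Process Automation Manager would evaluate such a model by applying the regression table to the input fields declared in the mining schema.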
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/pmml-con_pmml-models
Chapter 5. Uninstalling OpenShift Data Foundation
Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_google_cloud/uninstalling_openshift_data_foundation
20.5.2. Port Forwarding
20.5.2. Port Forwarding SSH can secure otherwise insecure TCP/IP protocols via port forwarding. When using this technique, the SSH server becomes an encrypted conduit to the SSH client. Port forwarding works by mapping a local port on the client to a remote port on the server. SSH can map any port from the server to any port on the client; port numbers do not need to match for this technique to work. To create a TCP/IP port forwarding channel which listens for connections on the localhost, use the following command: Note Setting up port forwarding to listen on ports below 1024 requires root level access. To check email on a server called mail.example.com using POP3 through an encrypted connection, use the following command: Once the port forwarding channel is in place between the client machine and the mail server, direct a POP3 mail client to use port 1100 on the localhost to check for new mail. Any requests sent to port 1100 on the client system are directed securely to the mail.example.com server. If mail.example.com is not running an SSH server, but another machine on the same network is, SSH can still be used to secure part of the connection. However, a slightly different command is necessary: In this example, POP3 requests from port 1100 on the client machine are forwarded through the SSH connection on port 22 to the SSH server, other.example.com . Then, other.example.com connects to port 110 on mail.example.com to check for new mail. Note, when using this technique only the connection between the client system and other.example.com SSH server is secure. Port forwarding can also be used to get information securely through network firewalls. If the firewall is configured to allow SSH traffic via its standard port (22) but blocks access to other ports, a connection between two hosts using the blocked ports is still possible by redirecting their communication over an established SSH connection. Note Using port forwarding to forward connections in this manner allows any user on the client system to connect to that service. If the client system becomes compromised, the attacker also has access to forwarded services. System administrators concerned about port forwarding can disable this functionality on the server by specifying a No parameter for the AllowTcpForwarding line in /etc/ssh/sshd_config and restarting the sshd service.
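For example, the AllowTcpForwarding restriction mentioned above could be applied as follows; the service command shown assumes the SysV init scripts used on Red Hat Enterprise Linux 4.
# In /etc/ssh/sshd_config, disable TCP port forwarding
AllowTcpForwarding no
# Restart the SSH daemon so the change takes effect
service sshd restart
Note that disabling TCP forwarding does not restrict X11 forwarding, which is controlled by the separate X11Forwarding directive.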
[ "ssh -L local-port : remote-hostname : remote-port username @ hostname", "ssh -L 1100:mail.example.com:110 mail.example.com", "ssh -L 1100:mail.example.com:110 other.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ssh-beyondshell-tcpip
Chapter 7. References
Chapter 7. References This chapter contains reference information for virt-v2v . 7.1. virt-v2v Parameters The following parameters can be used with virt-v2v : -i input Specifies the input method to obtain the guest for conversion. The default is libvirt. Supported options are: libvirt Guest argument is the name of a libvirt domain. libvirtxml Guest argument is the path to an XML file containing a libvirt domain. -ic URI Specifies the connection to use with the libvirt input method. If omitted, this defaults to qemu:///system . Note, this only works when virt-v2v is run as root. virt-v2v can currently automatically obtain guest storage from local libvirt connections, ESX / ESX(i) connections, and connections over SSH. Other types of connection are not supported. -o method Specifies the output method. If no output method is specified, the default is libvirt. Supported output methods are: libvirt Create a libvirt guest. See the -oc and -os options. -os must be specified for the libvirt output method. rhev Create a guest on a Red Hat Enterprise Virtualization export storage domain, which can later be imported using the Manager. The export storage domain must be specified using -os for the rhev output method. -oc URI Specifies the libvirt connection to use to create the converted guest. If omitted, this defaults to qemu:///system if virt-v2v is run as root. Note that virt-v2v must be able to write directly to storage described by this libvirt connection. This makes writing to a remote connection impractical at present. -os storage Specifies the location where new storage will be created for the converted guest. This is dependent on the output method, specified by the -o parameter. For the libvirt output method, this must be the name of a storage pool. For the rhev output method, this specifies the NFS path to a Red Hat Enterprise Virtualization export storage domain. Note that the storage domain must have been previously initialized by the Red Hat Enterprise Virtualization Manager. The domain must be in the format < host >:< path >, for example, rhev-storage.example.com:/rhev/export . The NFS export must be mountable and writable by the host running virt-v2v . -op pool (deprecated) This parameter is still supported, but is deprecated in favor of -os . -osd domain (deprecated) This parameter is still supported, but is deprecated in favor of -os . -of format Specifies the on-disk format which will be used for the converted guest. Currently supported options are raw and qcow2 . The output format does not need to be the same as the source format - virt-v2v can convert from raw to qcow2 and vice versa. If not specified, the converted guest will use the same format as the source guest. -oa allocation Specifies whether the converted guest should use sparse or preallocated storage. The allocation scheme does not need to be the same as the source scheme: virt-v2v can convert from sparse to preallocated and vice versa. If not specified, the converted guest will use the same allocation scheme as the source. -on outputname Renames the guest. If this option is not used, then the output name is the same as the input name. -f file | --config file Load a virt-v2v configuration from file. Multiple configuration files can be specified; these will be searched in the order in which they are specified. If no configuration is specified, the defaults are /etc/virt-v2v.conf and /var/lib/virt-v2v/virt-v2v.db in that order. 
Important When overriding the default configuration details, we recommend also specifying /var/lib/virt-v2v/virt-v2v.db , as it contains default configuration data required for conversions. -n network | --network network Map all guest bridges or networks which do not have a mapping in the configuration file to the specified network. This option cannot be used in conjunction with --bridge . -b bridge | --bridge bridge Map all guest bridges or networks which do not have a mapping in the configuration file to the specified bridge. This option cannot be used in conjunction with --network . -p profile | --profile profile Use the default values for output method, output storage and network mappings from profile in the configuration file. --root= filesystem In a multi-boot virtual machine, select the root file system to be converted. The default value for this option is --root=ask . When this option is selected, virt-v2v lists the possible root file systems and asks the user which file system should be used. Warning In versions of Red Hat Enterprise Linux earlier than version 6.3, the default value was --root=single , which could cause virt-v2v to fail when a multi-boot virtual machine was detected. Other available options include: first Selects the first root device if multiple devices are detected. Since this is a heuristic, the choice may not always be correct. single Specifies that there is only one root device available to use. virt-v2v will fail if more than one device is detected. <path> Specifies a particular root device to use, for example, --root=/dev/sda2 would specify the second partition on the first hard drive. If the specified device does not exist or was not detected as a root device, virt-v2v will fail. --list-profiles Display a list of target profile names specified in the configuration file. --help Display brief help. --version Display version number and exit.
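Combining several of these parameters, a typical invocation might look like the following sketch; the guest name, storage domain path, and new guest name are placeholders for an assumed environment.
# Convert a local libvirt guest, import it into a RHEV export storage domain,
# write sparse qcow2 disks, and rename the guest on output
virt-v2v -i libvirt -ic qemu:///system -o rhev -os rhev-storage.example.com:/rhev/export -of qcow2 -oa sparse -on rhel-guest-rhev rhel-guest
After the conversion completes, the guest can be imported from the export storage domain using the Red Hat Enterprise Virtualization Manager.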
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/chap-v2v_guide-references
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1]
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects objects and may change them. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object MutatingWebhook describes an admission webhook and the resources and operations it applies to. 4.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 4.1.2. .webhooks[] Description MutatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. The API server will try to use the first version in the list which it supports. If none of the versions specified in this list is supported by the API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped matchConditions[] object MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". 
- Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. reinvocationPolicy string reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are "Never" and "IfNeeded". Never: the webhook will not be called more than once in a single admission evaluation. IfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. 
Webhooks that specify this option must be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead. Defaults to "Never". Possible enum values: - "IfNeeded" indicates that the webhook may be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. - "Never" indicates that the webhook must not be called more than once in a single admission evaluation. rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Possible enum values: - "None" means that calling the webhook will have no side effects. - "NoneOnDryRun" means that calling the webhook will possibly have side effects, but if the request being reviewed has the dry-run attribute, the side effects will be suppressed. - "Some" means that calling the webhook will possibly have side effects. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. - "Unknown" means that no information is known about the side effects of calling the webhook. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds. 4.1.3. .webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. 
The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 4.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 4.1.5. .webhooks[].matchConditions Description MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped Type array 4.1.6. .webhooks[].matchConditions[] Description MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. 
A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 4.1.7. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 4.1.8. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 4.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations DELETE : delete collection of MutatingWebhookConfiguration GET : list or watch objects of kind MutatingWebhookConfiguration POST : create a MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations GET : watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} DELETE : delete a MutatingWebhookConfiguration GET : read the specified MutatingWebhookConfiguration PATCH : partially update the specified MutatingWebhookConfiguration PUT : replace the specified MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} GET : watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations HTTP method DELETE Description delete collection of MutatingWebhookConfiguration Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind MutatingWebhookConfiguration Table 4.3. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MutatingWebhookConfiguration Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 202 - Accepted MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.2. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations HTTP method GET Description watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} Table 4.8. 
Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration HTTP method DELETE Description delete a MutatingWebhookConfiguration Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MutatingWebhookConfiguration Table 4.11. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MutatingWebhookConfiguration Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MutatingWebhookConfiguration Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.4. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration HTTP method GET Description watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
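Tying these fields together, a minimal MutatingWebhookConfiguration might look like the following sketch; the webhook name, service reference, and rule are illustrative placeholders, and in practice the clientConfig would also carry a caBundle with a base64-encoded CA certificate.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook
webhooks:
  - name: mutate.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        namespace: example-namespace
        name: example-webhook-service
        path: /mutate
        port: 443
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
        scope: "Namespaced"
Such an object could be created with a POST to the /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations endpoint listed above, for example via oc create -f.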
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extension_apis/mutatingwebhookconfiguration-admissionregistration-k8s-io-v1
Chapter 2. Configuring SSO for Argo CD using Dex
Chapter 2. Configuring SSO for Argo CD using Dex After the Red Hat OpenShift GitOps Operator is installed, Argo CD automatically creates a user with admin permissions. To manage multiple users, cluster administrators can use Argo CD to configure Single Sign-On (SSO). Note The spec.dex parameter in the ArgoCD CR is no longer supported from Red Hat OpenShift GitOps v1.10.0 onwards. Consider using the .spec.sso parameter instead. 2.1. Configuration to enable the Dex OpenShift OAuth Connector Dex is installed by default for all the Argo CD instances created by the Operator. You can configure Red Hat OpenShift GitOps to use Dex as the SSO authentication provider by setting the .spec.sso parameter. Dex uses the users and groups defined within OpenShift Container Platform by checking the OAuth server provided by the platform. Procedure To enable Dex, set the .spec.sso.provider parameter to dex in the YAML resource of the Operator: # ... spec: sso: provider: dex dex: openShiftOAuth: true 1 # ... 1 The openShiftOAuth property triggers the Operator to automatically configure the built-in OpenShift Container Platform OAuth server when the value is set to true . 2.1.1. Mapping users to specific roles Argo CD cannot map users to specific roles if they have a direct ClusterRoleBinding role. You can manually change the role as role:admin on SSO through OpenShift. Procedure Create a group named cluster-admins . USD oc adm groups new cluster-admins Add the user to the group. USD oc adm groups add-users cluster-admins USER Apply the cluster-admin ClusterRole to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admins 2.2. Disabling Dex by replacing .spec.sso To disable dex, either remove the spec.sso element from the Argo CD custom resource or specify a different SSO provider.
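If you later need to disable Dex from the command line rather than by editing the YAML, one possible approach is a JSON patch that removes the sso section; the instance name and namespace below (example-argocd, openshift-gitops) are placeholders for your environment.
oc patch argocd example-argocd -n openshift-gitops --type=json -p='[{"op": "remove", "path": "/spec/sso"}]'
The patch fails if the spec.sso section is not present, in which case there is nothing to remove.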
[ "spec: sso: provider: dex dex: openShiftOAuth: true 1", "oc adm groups new cluster-admins", "oc adm groups add-users cluster-admins USER", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admins" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/access_control_and_user_management/configuring-sso-for-argo-cd-using-dex
Appendix A. Health messages for the Ceph File System
Appendix A. Health messages for the Ceph File System Cluster health checks The Ceph Monitor daemons generate health messages in response to certain states of the Metadata Server (MDS). Below is the list of the health messages and their explanation: mds rank(s) <ranks> have failed One or more MDS ranks are not currently assigned to any MDS daemon. The storage cluster will not recover until a suitable replacement daemon starts. mds rank(s) <ranks> are damaged One or more MDS ranks have encountered severe damage to their stored metadata, and cannot start again until the metadata is repaired. mds cluster is degraded One or more MDS ranks are not currently up and running, clients might pause metadata I/O until this situation is resolved. This includes ranks being failed or damaged, and includes ranks which are running on an MDS but are not in the active state yet - for example, ranks in the replay state. mds <names> are laggy The MDS daemons are supposed to send beacon messages to the monitor at an interval specified by the mds_beacon_interval option (the default is 4 seconds). If an MDS daemon fails to send a message within the time specified by the mds_beacon_grace option (the default is 15 seconds), the Ceph Monitor marks the MDS daemon as laggy and automatically replaces it with a standby daemon if any is available. Daemon-reported health checks The MDS daemons can identify a variety of unwanted conditions, and return them in the output of the ceph status command. These conditions have human readable messages, and also have a unique code starting MDS_HEALTH , which appears in JSON output. Below is the list of the daemon messages, their codes, and explanation. "Behind on trimming... " Code: MDS_HEALTH_TRIM CephFS maintains a metadata journal that is divided into log segments. The length of the journal (in number of segments) is controlled by the mds_log_max_segments setting. When the number of segments exceeds that setting, the MDS starts writing back metadata so that it can remove (trim) the oldest segments. If this process is too slow, or a software bug is preventing trimming, then this health message appears. The threshold for this message to appear is for the number of segments to be double mds_log_max_segments . Note Increasing mds_log_max_segments is recommended if the trim warning is encountered. However, ensure that this configuration is reset back to its default when the cluster health recovers and the trim warning is no longer seen. It is recommended to set mds_log_max_segments to 256 to allow the MDS to catch up with trimming. "Client <name> failing to respond to capability release" Code: MDS_HEALTH_CLIENT_LATE_RELEASE, MDS_HEALTH_CLIENT_LATE_RELEASE_MANY CephFS clients are issued capabilities by the MDS. The capabilities work like locks. Sometimes, for example, when another client needs access, the MDS requests clients to release their capabilities. If the client is unresponsive, it might fail to do so promptly, or fail to do so at all. This message appears if a client has taken a longer time to comply than the time specified by the mds_revoke_cap_timeout option (default is 60 seconds). "Client <name> failing to respond to cache pressure" Code: MDS_HEALTH_CLIENT_RECALL, MDS_HEALTH_CLIENT_RECALL_MANY Clients maintain a metadata cache. Items, such as inodes, in the client cache, are also pinned in the MDS cache. When the MDS needs to shrink its cache to stay within its own cache size limits, the MDS sends messages to clients to shrink their caches too. 
If a client is unresponsive, it can prevent the MDS from properly staying within its cache size, and the MDS might eventually run out of memory and terminate unexpectedly. This message appears if a client has taken more time to comply than the time specified by the mds_recall_state_timeout option (default is 60 seconds). See Metadata Server cache size limits section for details. "Client name failing to advance its oldest client/flush tid" Code: MDS_HEALTH_CLIENT_OLDEST_TID, MDS_HEALTH_CLIENT_OLDEST_TID_MANY The CephFS protocol for communicating between clients and MDS servers uses a field called oldest tid to inform the MDS of which client requests are fully complete so that the MDS can forget about them. If an unresponsive client is failing to advance this field, the MDS might be prevented from properly cleaning up resources used by client requests. This message appears if a client has more requests than the number specified by the max_completed_requests option (default is 100000) that are complete on the MDS side but have not yet been accounted for in the client's oldest tid value. "Metadata damage detected" Code: MDS_HEALTH_DAMAGE Corrupt or missing metadata was encountered when reading from the metadata pool. This message indicates that the damage was sufficiently isolated for the MDS to continue operating, although client accesses to the damaged subtree return I/O errors. Use the damage ls administration socket command to view details on the damage. This message appears as soon as any damage is encountered. "MDS in read-only mode" Code: MDS_HEALTH_READ_ONLY The MDS has entered into read-only mode and will return the EROFS error codes to client operations that attempt to modify any metadata. The MDS enters into read-only mode: If it encounters a write error while writing to the metadata pool. If the administrator forces the MDS to enter into read-only mode by using the force_readonly administration socket command. "<N> slow requests are blocked" Code: MDS_HEALTH_SLOW_REQUEST One or more client requests have not been completed promptly, indicating that the MDS is either running very slowly, or encountering a bug. Use the ops administration socket command to list outstanding metadata operations. This message appears if any client requests have taken more time than the value specified by the mds_op_complaint_time option (default is 30 seconds). "Too many inodes in cache" Code: MDS_HEALTH_CACHE_OVERSIZED The MDS has failed to trim its cache to comply with the limit set by the administrator. If the MDS cache becomes too large, the daemon might exhaust available memory and terminate unexpectedly. By default, this message appears if the MDS cache size is 50% greater than its limit. Additional Resources See the Metadata Server cache size limits section in the Red Hat Ceph Storage File System Guide for details.
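For example, if the trim warning appears, the journal segment limit can be raised at runtime and reset after recovery; this sketch assumes a release where the centralized ceph config command is available, as in Red Hat Ceph Storage 6.
# Allow the MDS to catch up with trimming
ceph config set mds mds_log_max_segments 256
# After the cluster health recovers and the warning clears, return to the default
ceph config rm mds mds_log_max_segments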
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/health-messages-for-the-ceph-file-system_fs
Chapter 2. Content Management Types Overview
Chapter 2. Content Management Types Overview With Red Hat Satellite, you can manage the following content types: RPM Packages Import RPM packages from repositories related to your Red Hat subscriptions. Satellite Server downloads the RPM files from Red Hat's Content Delivery Network and stores them locally. You can use these repositories and their RPM files in Content Views. Kickstart Trees Import the kickstart trees for creating a system. New systems access these kickstart trees over a network to use as base content for their installation. Red Hat Satellite also contains some predefined kickstart templates as well as the ability to create your own, which are used to provision systems and customize the installation. You can also manage other types of custom content in Satellite. For example: ISO and KVM Images Download and manage media for installation and provisioning. For example, Satellite downloads, stores, and manages ISO images and guest images for specific Red Hat Enterprise Linux and non-Red Hat operating systems. You can use the procedure to add custom content for any type of content you require, for example, SSL certificates and OVAL files.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/content_management_types_overview_content-management
Using IdM Healthcheck to monitor your IdM environment
Using IdM Healthcheck to monitor your IdM environment Red Hat Enterprise Linux 9 Performing status and health checks Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_healthcheck_to_monitor_your_idm_environment/index
Chapter 1. Support policy for Eclipse Temurin
Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.392_release_notes/openjdk8-temurin-support-policy
probe::tty.register
probe::tty.register Name probe::tty.register - Called when a tty device is registered Synopsis Values driver_name the driver name name the driver .dev_name name index the tty index requested module the module name
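A short SystemTap script using this probe might print the listed values each time a tty device is registered; this is an illustrative sketch only.
# tty_register.stp - report tty registrations
probe tty.register {
  printf("tty registered: name=%s driver=%s index=%d module=%s\n",
         name, driver_name, index, module)
}
Run it with stap tty_register.stp and trigger a registration, for example by loading a serial or USB-serial driver.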
[ "tty.register" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tty-register
1.5.2. Firewall Marks
1.5.2. Firewall Marks Firewall marks are an easy and efficient way to group ports used for a protocol or group of related protocols. For instance, if LVS is deployed to run an e-commerce site, firewall marks can be used to bundle HTTP connections on port 80 and secure HTTPS connections on port 443. By assigning the same firewall mark to the virtual server for each protocol, state information for the transaction can be preserved because the LVS router forwards all requests to the same real server after a connection is opened. Because of its efficiency and ease-of-use, administrators of LVS should use firewall marks instead of persistence whenever possible for grouping connections. However, administrators should still add persistence to the virtual servers in conjunction with firewall marks to ensure the clients are reconnected to the same server for an adequate period of time.
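As an illustration of the HTTP/HTTPS example above, the firewall mark is typically assigned on the LVS router with iptables rules similar to the following sketch, where 192.0.2.10 stands in for the virtual IP address and 80 is the mark value referenced later in the virtual server configuration.
# Assign the same firewall mark to HTTP and HTTPS traffic destined for the virtual IP
iptables -t mangle -A PREROUTING -p tcp -d 192.0.2.10/32 --dport 80 -j MARK --set-mark 80
iptables -t mangle -A PREROUTING -p tcp -d 192.0.2.10/32 --dport 443 -j MARK --set-mark 80
Because both rules set the same mark, the virtual server defined for that mark treats the HTTP and HTTPS connections as a single group.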
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-lve-fwmarks-vsa
Chapter 2. Getting started using the compliance service
Chapter 2. Getting started using the compliance service This section describes how to configure your RHEL systems to report compliance data to the Insights for RHEL application. This installs necessary additional components such as the SCAP Security Guide (SSG), which is used to perform the compliance scan. Prerequisites The Insights client is deployed on the system. You must have root privileges on the system. Procedure Check the version of RHEL on the system: Review the Insights Compliance - Supported configurations article and make note of the supported SSG version for the RHEL minor version on the system. Note Some minor versions of RHEL support more than one version of SSG. The Insights compliance service will always show results for the latest supported version. Check if the supported version of the SSG package is installed on the system: Example - for RHEL 8.4 run: If it is not already installed, install the supported version of SSG on the system. Example - for RHEL 8.4 run: Assign systems to policies using the Insights compliance service UI, or using insights-client commands in the CLI: Use the compliance service UI to navigate to Security > Compliance > SCAP policies and use one of the following methods to add systems: Creating new SCAP policies Editing included systems You can also add systems by using the following insights-client commands on the CLI: insights-client --compliance-policies to list available policies and their associated ID insights-client --compliance-assign <ID> For more information about using insights-client commands to add systems, see Managing SCAP security policies in the Insights for RHEL compliance service in Assessing and Monitoring Security Policy Compliance of RHEL Systems . Options for the Insights client in Client Configuration Guide for Red Hat Insights . After adding each system to the needed security policy, return to the system and run the compliance scan using: Note The scan can take 1-5 minutes to complete. Navigate to Security > Compliance > Reports to view results. Optional: Schedule the compliance jobs to run with cron . Additional Resources To learn which versions of the SCAP Security Guide are supported for Red Hat Enterprise Linux minor versions, see Insights Compliance - Supported configurations . 2.1. Setting up recurring scans for Insights services To get the most accurate recommendations from Red Hat Insights services such as compliance and malware detection, you might need to manually scan and upload data collection reports to the services on a regular schedule. Use the following insights-client commands to run the commands manually: Currently, Insights does not have an automated scheduler to perform the scans for you, but you can configure a cron job to schedule automatic scans. Important Before you create a cron job, make sure that the commands work properly when you run them manually. Prerequisites The services you want to use (Compliance and Malware Detection) are configured and running on your system. Procedure At the system prompt, issue the crontab -e command to edit the crontab file. This command opens your default text editor. Add a crontab entry for the service you want to run. For example: In this example, the first command uploads a Compliance report to Insights every day at 20:10 local time. The second command uploads a malware detection report to Insights every day at 21:10 local time. Save the file and exit the text editor.
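Putting the CLI steps together, a session might look like the following sketch; the policy ID is a made-up placeholder that you would replace with an ID returned by the first command.
# List available SCAP policies and their IDs
insights-client --compliance-policies
# Assign this system to a policy (placeholder ID shown)
insights-client --compliance-assign aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
# Run the compliance scan and upload the results to Insights
insights-client --compliance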
[ "[user@insights]USD cat /etc/redhat-release", "dnf info scap-security-guide-0.1.57-3.el8_4", "dnf install scap-security-guide-0.1.57-3.el8_4", "insights-client --compliance", "insights-client --compliance insights-client --collector malware-detection", "crontab -e", "10 20 * * * /bin/insights-client --compliance 10 21 * * * /bin/insights-client --collector malware-detection" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems/compliance-getting-started_intro-compliance
7.189. sblim-sfcb
7.189. sblim-sfcb 7.189.1. RHBA-2015:1432 - sblim-sfcb bug fix update Updated sblim-sfcb packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Small Footprint CIM Broker (sblim-sfcb) is a Common Information Model (CIM) server conforming to the CIM Operations over the HTTP protocol. The SFCB CIM server is robust and resource-efficient, and is therefore particularly-suited for embedded and resource-constrained environments. The sblim-sfcb package supports providers written against the Common Manageability Programming Interface (CMPI). Bug Fixes BZ# 1102477 Due to incorrect buffer handling in the sblim-sfcb server, the wbemcli CIM client returned an error message when trying to connect to sblim-sfcb over the HTTPS protocol. A patch has been provided to fix this bug, and sblim-sfcb is now reachable over HTTPS without any errors. BZ# 1110106 When a sblim-sfcb server was used in combination with Openwsman and the openwsmand service connected locally to the sblim-sfcb server, a defunct process was left behind. As a consequence, a new process could not be created by the system. With this update, Openwsman defunct processes no longer occur after terminating the connection to the sblim-sfcb server. BZ# 1114798 Due to a memory leak in the sblim-sfcb server, the amount of memory consumed by the sfcbd service process was increased. The underlying source code has been modified to fix this bug, and the sfcbd service process no longer causes an unwanted memory consumption increase. Users of sblim-sfcb are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-sblim-sfcb
Chapter 2. Configuring a private cluster
Chapter 2. Configuring a private cluster After you install an OpenShift Container Platform version 4.11 cluster, you can set some of its core components to be private. 2.1. About private clusters By default, OpenShift Container Platform is provisioned using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your private cluster. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. DNS If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps , for the Ingress object, and api , for the API server. The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster. Ingress Controller Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure. API server By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic. On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancers based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path. On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer. On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you must retain both load balancers in a private cluster. 2.2. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 2.3. Setting the Ingress Controller to private After you deploy a cluster, you can modify its Ingress Controller to use only a private zone. Procedure Modify the default Ingress Controller to use only an internal endpoint: USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF Example output ingresscontroller.operator.openshift.io "default" deleted ingresscontroller.operator.openshift.io/default replaced The public DNS entry is removed, and the private zone entry is updated. 2.4. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for AWS or Azure, take the following actions: Locate and delete appropriate load balancer component. For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. For Azure, delete the api-internal rule for the load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers: Important You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers. 
From your terminal, list the cluster machines: USD oc get machine -n openshift-machine-api Example output NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m You modify the control plane machines, which contain master in the name, in the following step. Remove the external load balancer from each control plane machine. Edit a control plane Machine object to remove the reference to the external load balancer: USD oc edit machines -n openshift-machine-api <master_name> 1 1 Specify the name of the control plane, or master, Machine object to modify. Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification: ... spec: providerSpec: value: ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network 1 2 Delete this line. Repeat this process for each of the machines that contains master in the name. 2.4.1. Configuring the Ingress Controller endpoint publishing scope to Internal When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . Cluster administrators can change an External scoped Ingress Controller to Internal . Prerequisites You installed the oc CLI. Procedure To change an External scoped Ingress Controller to Internal , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as Internal .
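As an optional check after completing these procedures, the following commands are one way (illustrative, and assuming the default resource names used above) to confirm that the public DNS zone is gone and that the default Ingress Controller is published on an internal load balancer:

# Should print nothing once the public zone has been removed
oc get dnses.config.openshift.io/cluster -o jsonpath='{.spec.publicZone}'
# Should print Internal after the Ingress Controller has been replaced or patched
oc -n openshift-ingress-operator get ingresscontrollers/default -o jsonpath='{.status.endpointPublishingStrategy.loadBalancer.scope}'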
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <master_name> 1", "spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'", "oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml", "oc -n openshift-ingress delete services/router-default" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/configuring-private-cluster
Chapter 16. Managing container images by using the RHEL web console
Chapter 16. Managing container images by using the RHEL web console You can use the RHEL web console, a web-based interface, to pull, prune, or delete your container images. 16.1. Pulling container images in the web console You can download container images to your local system and use them to create your containers. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Images table, click the overflow menu in the upper-right corner and select Download new image . The Search for an image dialog box appears. In the Search for field, enter the name of the image or specify its description. In the in drop-down list, select the registry from which you want to pull the image. Optional: In the Tag field, enter the tag of the image. Click Download . Verification Click Podman containers in the main menu. You can see the newly downloaded image in the Images table. Note You can create a container from the downloaded image by clicking Create container in the Images table. To create the container, follow steps 3-8 in Creating containers in the web console . 16.2. Pruning container images in the web console You can remove all unused images that do not have any containers based on them. Prerequisites At least one container image is pulled. You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Images table, click the overflow menu in the upper-right corner and select Prune unused images . The pop-up window with the list of images appears. Click Prune to confirm your choice. Verification Click Podman containers in the main menu. The deleted images should not be listed in the Images table. 16.3. Deleting container images in the web console You can delete a previously pulled container image by using the web console. Prerequisites At least one container image is pulled. You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Images table, select the image you want to delete, click the overflow menu, and select Delete . A confirmation window appears. Click Delete tagged images to confirm your choice. Verification Click Podman containers in the main menu. The deleted image should not be listed in the Images table.
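If the prerequisites are not yet met, one typical way to prepare a RHEL 9 host is shown below; this assumes the web console packages come from the standard RHEL repositories and that you run the commands as root:

# Install the web console and its Podman add-on
dnf install cockpit cockpit-podman
# Enable and start the web console, which listens on port 9090 by default
systemctl enable --now cockpit.socket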
[ "dnf install cockpit-podman", "dnf install cockpit-podman", "dnf install cockpit-podman" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/managing-container-images-by-using-the-rhel-web-console_building-running-and-managing-containers
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1]
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1] Description UserOAuthAccessToken is a virtual resource to mirror OAuthAccessTokens to the user the access token was issued for Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources authorizeToken string AuthorizeToken contains the token that authorized this token clientName string ClientName references the client that created this token. expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. inactivityTimeoutSeconds integer InactivityTimeoutSeconds is the value in seconds, from the CreationTimestamp, after which this token can no longer be used. The value is automatically incremented when the token is used. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta redirectURI string RedirectURI is the redirection associated with the token. refreshToken string RefreshToken is the value by which this token can be renewed. Can be blank. scopes array (string) Scopes is an array of the requested scopes. userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token 6.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/useroauthaccesstokens GET : list or watch objects of kind UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens GET : watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} DELETE : delete an UserOAuthAccessToken GET : read the specified UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} GET : watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/oauth.openshift.io/v1/useroauthaccesstokens Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind UserOAuthAccessToken Table 6.2. 
HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessTokenList schema 401 - Unauthorized Empty 6.2.2. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens Table 6.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. Table 6.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} Table 6.5. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken Table 6.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an UserOAuthAccessToken Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.8. Body parameters Parameter Type Description body DeleteOptions schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserOAuthAccessToken Table 6.10. HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessToken schema 401 - Unauthorized Empty 6.2.4. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} Table 6.11. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken Table 6.12. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.13. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
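For day-to-day use, these endpoints are also reachable through the standard command-line client; the following commands are an illustration rather than part of the generated reference, and <token_name> is a placeholder for a real token name:

# List the OAuth access tokens issued to the logged-in user
oc get useroauthaccesstokens
# Delete (revoke) a specific token
oc delete useroauthaccesstoken <token_name>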
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/oauth_apis/useroauthaccesstoken-oauth-openshift-io-v1
18.4. Configuring a VNC Server
18.4. Configuring a VNC Server To set up graphical desktop sharing between the host and the guest machine using Virtual Network Computing (VNC), a VNC server has to be configured on the guest you wish to connect to. To do this, VNC has to be specified as a graphics type in the devices element of the guest's XML file. For further information, see Section 23.17.11, "Graphical Framebuffers" . To connect to a VNC server, use the virt-viewer utility or the virt-manager interface.
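A minimal sketch of what this can look like, assuming a guest named rhel7-guest; the graphics element below is the standard libvirt VNC definition, and the exact attributes you need may differ:

# Open the guest's XML definition and add a VNC device inside <devices>:
#   <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
virsh edit rhel7-guest
# Then connect to the graphical console
virt-viewer rhel7-guest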
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-remote_management_of_guests-configuring_a_vnc_server
Preface
Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create an account. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/pr01
Getting started with resource optimization for OpenShift
Getting started with resource optimization for OpenShift Cost Management Service 1-latest Learn about resource optimization for OpenShift Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_resource_optimization_for_openshift/index
4.19. RHEA-2012:0853 - new packages: usbredir
4.19. RHEA-2012:0853 - new packages: usbredir New usbredir packages are now available for Red Hat Enterprise Linux 6. The usbredir packages provide a protocol for redirection of USB traffic from a single USB device to a different virtual machine than the one to which the USB device is attached. The usbredir package contains a number of libraries to help implement support for usbredir. This enhancement update adds the usbredir package to Red Hat Enterprise Linux 6. (BZ# 758098 ) Users who wish to use the new USB redirection for Spice are advised to install these new packages.
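As a small illustration, installing the new packages on a Red Hat Enterprise Linux 6 host could look like the following; the exact package set you need depends on your Spice client or development requirements:

# Install the usbredir runtime libraries
yum install usbredir
# Verify the installation
rpm -q usbredir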
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0853
Virtualization
Virtualization Red Hat OpenShift Service on AWS 4 OpenShift Virtualization installation and usage. Red Hat OpenShift Documentation Team
[ "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml", "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <path/virtctl-file-name>", "echo USDPATH", "export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig", "C:\\> path", "echo USDPATH", "subscription-manager repos --enable cnv-4.18-for-rhel-8-x86_64-rpms", "yum install kubevirt-virtctl", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates\", \"value\": \"HotplugVolumes\"}]'", "virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> --volume=<volume_name> --output=<output_file>", "virtctl guestfs -n <namespace> <pvc_name> 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.18.0 OpenShift Virtualization 4.18.0 Succeeded", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc delete namespace openshift-cnv", "oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)", "oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc edit <resource_type> <resource_name> -n {CNVNamespace}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In 
values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile 
completed successfully", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2", "virtctl create instancetype --cpu 2 --memory 256Mi", "virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -", "virtctl create vm --instancetype <my_instancetype> --preference <my_preference>", "virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>", "virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference", "virtctl create vm --volume-import=type:pvc,src:my-ns/my-pvc-a,name:volume-a --volume-import=type:pvc,src:my-ns/my-pvc-b,name:volume-b --infer-instancetype-from volume-b --infer-preference-from volume-b", "oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: commonBootImageNamespace: <custom_namespace> 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "virtctl stop <my_vm_name>", "oc get vm <my_vm_name> -o jsonpath=\"{.spec.template.spec.volumes}{'\\n'}\"", "[{\"dataVolume\":{\"name\":\"<my_vm_volume>\"},\"name\":\"rootdisk\"},{\"cloudInitNoCloud\":{...}]", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE <my_vm_volume> Bound ...", "virtctl guestfs <my-vm-volume> --uid 107", "virt-sysprep -a disk.img", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS 
builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: runStrategy: Halted template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header \"Authorization: Bearer USD{TOKEN}\" 
\"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --user=\"USD{USER_NAME}\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --serviceaccount=\"USD{SERVICE_ACCOUNT_NAME}\"", "virtctl console <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 
3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: 
virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107", "oc apply -f windows11-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: runStrategy: Halted template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: 
preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/VMPersistentState\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. 
eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: runStrategy: Always template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: v1 kind: Namespace metadata: name: udn_namespace labels: k8s.ovn.org/primary-user-defined-network: \"\" 1", "apply -f <filename>.yaml", "apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-l2-net 1 namespace: my-namespace 2 spec: topology: Layer2 3 layer2: role: Primary 4 subnets: - \"10.0.0.0/24\" - \"2001:db8::/60\" ipam: lifecycle: Persistent 5", "oc apply -f --validate=true <filename>.yaml", "kind: ClusterUserDefinedNetwork metadata: name: cudn-l2-net 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name 
operator: In 4 values: [\"red-namespace\", \"blue-namespace\"] network: topology: Layer2 5 layer2: role: Primary 6 ipam: lifecycle: Persistent subnets: - 203.203.0.0/16", "oc apply -f --validate=true <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: my-namespace 1 spec: template: spec: domain: devices: interfaces: - name: udn-l2-net 2 binding: name: l2bridge 3 networks: - name: udn-l2-net 4 pod: {}", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: runStrategy: Always template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4", "oc apply -f <filename>.yaml", "virtctl start <vm_name> -n <namespace>", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3", "virtctl migrate <vm_name>", "oc get VirtualMachineInstanceMigration -w", "NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora", "oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"", "[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>", "virtctl migrate <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: 
sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc get storageprofile", "oc describe storageprofile <name>", "Name: ocs-storagecluster-ceph-rbd-virtualization Namespace: Labels: app=containerized-data-importer app.kubernetes.io/component=storage app.kubernetes.io/managed-by=cdi-controller app.kubernetes.io/part-of=hyperconverged-cluster app.kubernetes.io/version=4.17.2 cdi.kubevirt.io= Annotations: <none> API Version: cdi.kubevirt.io/v1beta1 
Kind: StorageProfile Metadata: Creation Timestamp: 2023-11-13T07:58:02Z Generation: 2 Owner References: API Version: cdi.kubevirt.io/v1beta1 Block Owner Deletion: true Controller: true Kind: CDI Name: cdi-kubevirt-hyperconverged UID: 2d6f169a-382c-4caf-b614-a640f2ef8abb Resource Version: 4186799537 UID: 14aef804-6688-4f2e-986b-0297fd3aaa68 Spec: Status: Claim Property Sets: 1 accessModes: ReadWriteMany volumeMode: Block accessModes: ReadWriteOnce volumeMode: Block accessModes: ReadWriteOnce volumeMode: Filesystem Clone Strategy: csi-clone 2 Data Import Cron Source Format: snapshot 3 Provisioner: openshift-storage.rbd.csi.ceph.com Snapshot Class: ocs-storagecluster-rbdplugin-snapclass Storage Class: ocs-storagecluster-ceph-rbd-virtualization Events: <none>", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubevirt.io/is-default-virt-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"false\"}}}'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubernetes.io/is-default-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"true\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel9-image-cron spec: template: spec: storage: storageClassName: <storage_class> 1 schedule: \"0 */12 * * *\" 2 managedDataSource: <data_source> 3", "For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.", "oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron", "oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos-stream9-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos-stream9 4", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: spec: dataImportCronSourceFormat: snapshot", "oc get storageprofile <storage_class> -oyaml", "oc edit hyperconverged 
kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: status: dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-9-image-cron spec: garbageCollect: Outdated managedDataSource: centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: {} 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2", "oc get cdiconfig -o yaml", "oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_cr.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_pvc_template_pool.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: 
storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6", "oc edit vm <vm_name>", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 kubevirt.io/environment: \"production\"", "oc create -f <migration_policy>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>", "oc create -f <migration_name>.yaml", "oc describe vmi <vm_name> -n <namespace>", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1", "virtctl restart <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always", "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu 
fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "kubevirt_vmsnapshot_disks_restored_from_source{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 
node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 
2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: runStrategy: Halted template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "oc get events -n <namespace>", "oc describe <resource> <resource_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6", "oc get pods -n openshift-cnv", "NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m", "oc logs -n openshift-cnv <pod_name>", "{\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373695Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373726Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"setting rate limiter to 5 QPS and 10 Burst\",\"pos\":\"virt-handler.go:462\",\"timestamp\":\"2022-04-17T08:58:37.373782Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]\",\"pos\":\"cpu_plugin.go:96\",\"timestamp\":\"2022-04-17T08:58:37.390221Z\"} {\"component\":\"virt-handler\",\"level\":\"warning\",\"msg\":\"host model mode is expected to contain only one model\",\"pos\":\"cpu_plugin.go:103\",\"timestamp\":\"2022-04-17T08:58:37.390263Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"node-labeller is running\",\"pos\":\"node_labeller.go:94\",\"timestamp\":\"2022-04-17T08:58:37.391011Z\"}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #", "oc apply vm <vm_name>", "virtctl restart <vm_name> -n <namespace>", "oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", 
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"storage\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"deployment\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"network\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"compute\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"schedule\"", "{log_type=~\".+\",kubernetes_container_name=~\"<container>|<container>\"} 1 |json|kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\", kubernetes_container_name=\"compute\"}|json |!= \"custom-ga-command\" 1", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |= \"error\" != \"timeout\"", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. 
Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "oc get kubevirt kubevirt-hyperconverged -n openshift-cnv -o yaml", "spec: developerConfiguration: featureGates: - Snapshot", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>", "oc create -f <snapshot_name>.yaml", "oc wait <vm_name> <snapshot_name> --for condition=Ready", "oc describe vmsnapshot <snapshot_name>", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 indications: 5 - Online includedVolumes: 6 - name: rootdisk kind: PersistentVolumeClaim namespace: default - name: datadisk1 kind: DataVolume namespace: default", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>", "oc create -f <vm_restore>.yaml", "oc get vmrestore <vm_restore>", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <snapshot_name>", "oc get vmsnapshot", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 
resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'", "{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/virtualization/index
Chapter 22. Valgrind
Chapter 22. Valgrind Valgrind is an instrumentation framework for building dynamic analysis tools that can be used to profile applications in detail. The default installation already provides five standard tools. Valgrind tools are generally used to investigate memory management and threading problems. Valgrind provides instrumentation for user-space binaries to check for errors, such as the use of uninitialized memory, improper allocation/freeing of memory, and improper arguments for system calls. Its profiling tools can be used on most binaries; however, compared to other profilers, Valgrind profile runs are significantly slower. To profile a binary, Valgrind runs it inside a special virtual machine, which allows Valgrind to intercept all of the binary instructions. Valgrind's tools are most useful for looking for memory-related issues in user-space programs; Valgrind is not suitable for debugging time-specific issues or kernel-space instrumentation and debugging. Valgrind reports are most useful and accurate when debuginfo packages are installed for the programs or libraries under investigation. See Section 20.1, "Enabling Debugging with Debugging Information". 22.1. Valgrind Tools The Valgrind suite is composed of the following tools: memcheck This tool detects memory management problems in programs: By checking all reads from and writes to memory By intercepting memory manipulations like calls to malloc, free, new, or delete memcheck is perhaps the most used Valgrind tool, as memory management problems can be difficult to detect using other means. Such problems often remain undetected for long periods, eventually causing crashes that are difficult to diagnose. memcheck functions as the default tool when no specific tool is selected. cachegrind cachegrind is a cache profiler that accurately pinpoints sources of cache misses in code by performing a detailed simulation of the I1, D1, and L2 caches in the CPU. It shows the number of cache misses, memory references, and instructions accruing to each line of source code; cachegrind also provides per-function, per-module, and whole-program summaries, and can even show counts for each individual machine instruction. callgrind Like cachegrind, callgrind can model cache behavior. However, the main purpose of callgrind is to record call graph data for the executed code. massif massif is a heap profiler; it measures how much heap memory a program uses, providing information on heap blocks, heap administration overheads, and stack sizes. Heap profilers are useful in finding ways to reduce heap memory usage. On systems that use virtual memory, programs with optimized heap memory usage are less likely to run out of memory, and may be faster as they require less paging. helgrind In programs that use the POSIX pthreads threading primitives, helgrind detects synchronization errors. Such errors are: Misuses of the POSIX pthreads API Potential deadlocks arising from lock ordering problems Data races (that is, accessing memory without adequate locking) 22.2. Using Valgrind The valgrind package and its dependencies install all the necessary tools for performing a Valgrind profile run. To profile a program with Valgrind, use: See Section 22.1, "Valgrind Tools" for a list of arguments for toolname. In addition to the suite of Valgrind tools, none is also a valid argument for toolname; this argument allows you to run a program under Valgrind without performing any profiling. This is useful for debugging or benchmarking Valgrind itself.
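For instance (a minimal sketch; the binary name ./myprog is a placeholder, and the <pid> suffix is filled in by each run), the cache and heap profilers described above are selected with the same --tool option as memcheck, and each writes a data file that is then inspected with its companion reporting tool:

valgrind --tool=cachegrind ./myprog
cg_annotate cachegrind.out.<pid>

valgrind --tool=massif ./myprog
ms_print massif.out.<pid>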
You can also instruct Valgrind to send all of its information to a specific file. To do so, use the option --log-file=filename. For example, to check the memory usage of the executable file hello and send profile information to output, use: See Section 22.3, "Additional information" for more information on Valgrind, along with other available documentation on the Valgrind suite of tools. 22.3. Additional information For more extensive information on Valgrind, see man valgrind. Red Hat Enterprise Linux also provides a comprehensive Valgrind Documentation book available as PDF and HTML in: /usr/share/doc/valgrind-version/valgrind_manual.pdf /usr/share/doc/valgrind-version/html/index.html
[ "valgrind --tool= toolname program", "valgrind --tool=memcheck --log-file=output hello" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/valgrind
Chapter 3. The Project Tab
Chapter 3. The Project Tab The Project tab provides an interface for viewing and managing the resources of a project. Set a project as active in Identity > Projects to view and manage resources in that project. The following options are available in the Project tab: Table 3.1. The Compute Tab Parameter Name Description Overview View reports for the project. Instances View, launch, create a snapshot from, stop, pause, or reboot instances, or connect to them through the console. Volumes Use the following tabs to complete these tasks: Volumes - View, create, edit, and delete volumes. Volume Snapshots - View, create, edit, and delete volume snapshots. Images View images, instance snapshots, and volume snapshots created by project users, and any images that are publicly available. Create, edit, and delete images, and launch instances from images and snapshots. Access & Security Use the following tabs to complete these tasks: Security Groups - View, create, edit, and delete security groups and security group rules. Key Pairs - View, create, edit, import, and delete key pairs. Floating IPs - Allocate an IP address to or release it from a project. API Access - View API endpoints, download the OpenStack RC file, download EC2 credentials, and view credentials for the logged-in project user. Table 3.2. The Network Tab Parameter Name Description Network Topology View the interactive topology of the network. Networks Create and manage public and private networks and subnets. Routers Create and manage routers. Trunks Create and manage trunks. Requires the trunk extension enabled in OpenStack Networking (neutron). Table 3.3. The Object Store Tab Parameter Name Description Containers Create and manage storage containers. A container is a storage compartment for data, and provides a way for you to organize your data. It is similar to the concept of a Linux file directory, but it cannot be nested. Table 3.4. The Orchestration Tab Parameter Name Description Stacks Orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/the_project_tab
7.1 Release Notes
7.1 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.1 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/index
Part II. Managing projects in Business Central
Part II. Managing projects in Business Central As a process administrator, you can use Business Central in Red Hat Decision Manager to manage new, sample, and imported projects on a single branch or multiple branches. Prerequisites Red Hat JBoss Enterprise Application Platform 7.4 is installed. For details, see the Red Hat JBoss Enterprise Application Platform 7.4 Installation Guide. Red Hat Decision Manager is installed and configured with KIE Server. For more information, see Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4. Red Hat Decision Manager is running and you can log in to Business Central with the developer role. For more information, see Planning a Red Hat Decision Manager installation.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assembly-managing-projects
9.2. XML Representation of a Data Center
9.2. XML Representation of a Data Center Example 9.1. An XML representation of a data center
[ "<data_center href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <name>Default</name> <description>The default Data Center</description> <link href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000/storagedomains\" rel=\"storagedomains\"/> <link href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000/clusters\" rel=\"clusters\"/> <link href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000/networks\" rel=\"networks\"/> <link href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000/quotas\" rel=\"quotas\"/> <local>false</local> <storage_format>v3</storage_format> <version major=\"4\" minor=\"0\"/> <supported_versions> <version major=\"4\" minor=\"0\"/> </supported_versions> <status> <state>up</state> </status> <mac_pool href=\"/ovirt-engine/api/macpools/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> </data_center>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_data_center
32.2. How Do You Perform a Kickstart Installation?
32.2. How Do You Perform a Kickstart Installation? Kickstart installations can be performed using a local DVD or a local hard drive, or over NFS, FTP, HTTP, or HTTPS. To use kickstart, you must: Create a kickstart file. Create boot media containing the kickstart file, or make the kickstart file available on the network. Make the installation tree available. Start the kickstart installation. This chapter explains these steps in detail.
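As a sketch of the final step (the server name, device name, and file paths are placeholders), the installer is pointed at the kickstart file with the ks= boot option at the installation boot prompt, with the location syntax depending on where the file was placed:

linux ks=cdrom:/ks.cfg
linux ks=hd:sdb1:/ks.cfg
linux ks=http://server.example.com/kickstart/ks.cfg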
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-howuse
Chapter 4. APITokenService
Chapter 4. APITokenService 4.1. ListAllowedTokenRoles GET /v1/apitokens/generate/allowed-roles GetAllowedTokenRoles return roles that user is allowed to request for API token. 4.1.1. Description 4.1.2. Parameters 4.1.3. Return Type V1ListAllowedTokenRolesResponse 4.1.4. Content Type application/json 4.1.5. Responses Table 4.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListAllowedTokenRolesResponse 0 An unexpected error response. RuntimeError 4.1.6. Samples 4.1.7. Common object reference 4.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 4.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 4.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 4.1.7.3. V1ListAllowedTokenRolesResponse Field Name Required Nullable Type Description Format roleNames List of string 4.2. 
GenerateToken POST /v1/apitokens/generate GenerateToken generates API token for a given user and role. 4.2.1. Description 4.2.2. Parameters 4.2.2.1. Body Parameter Name Description Required Default Pattern body V1GenerateTokenRequest X 4.2.3. Return Type V1GenerateTokenResponse 4.2.4. Content Type application/json 4.2.5. Responses Table 4.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GenerateTokenResponse 0 An unexpected error response. RuntimeError 4.2.6. Samples 4.2.7. Common object reference 4.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 4.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 4.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 4.2.7.3. 
StorageTokenMetadata Field Name Required Nullable Type Description Format id String name String roles List of string issuedAt Date date-time expiration Date date-time revoked Boolean role String 4.2.7.4. V1GenerateTokenRequest Field Name Required Nullable Type Description Format name String role String roles List of string expiration Date date-time 4.2.7.5. V1GenerateTokenResponse Field Name Required Nullable Type Description Format token String metadata StorageTokenMetadata 4.3. GetAPITokens GET /v1/apitokens GetAPITokens returns all the API tokens. 4.3.1. Description 4.3.2. Parameters 4.3.2.1. Query Parameters Name Description Required Default Pattern revoked - null 4.3.3. Return Type V1GetAPITokensResponse 4.3.4. Content Type application/json 4.3.5. Responses Table 4.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAPITokensResponse 0 An unexpected error response. RuntimeError 4.3.6. Samples 4.3.7. Common object reference 4.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 4.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 4.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 4.3.7.3. StorageTokenMetadata Field Name Required Nullable Type Description Format id String name String roles List of string issuedAt Date date-time expiration Date date-time revoked Boolean role String 4.3.7.4. V1GetAPITokensResponse Field Name Required Nullable Type Description Format tokens List of StorageTokenMetadata 4.4. GetAPIToken GET /v1/apitokens/{id} GetAPIToken returns API token metadata for a given id. 4.4.1. Description 4.4.2. Parameters 4.4.2.1. Path Parameters Name Description Required Default Pattern id X null 4.4.3. Return Type StorageTokenMetadata 4.4.4. Content Type application/json 4.4.5. Responses Table 4.4. HTTP Response Codes Code Message Datatype 200 A successful response. StorageTokenMetadata 0 An unexpected error response. RuntimeError 4.4.6. Samples 4.4.7. Common object reference 4.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 4.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 4.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 4.4.7.3. StorageTokenMetadata Field Name Required Nullable Type Description Format id String name String roles List of string issuedAt Date date-time expiration Date date-time revoked Boolean role String 4.5. RevokeToken PATCH /v1/apitokens/revoke/{id} RevokeToken removes the API token for a given id. 4.5.1. Description 4.5.2. Parameters 4.5.2.1. Path Parameters Name Description Required Default Pattern id X null 4.5.3. Return Type Object 4.5.4. Content Type application/json 4.5.5. Responses Table 4.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 4.5.6. Samples 4.5.7. Common object reference 4.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 4.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 4.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny
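The Samples subsections above are left empty in this reference. As a rough illustration of how these endpoints can be called, the following sketch assumes that RHACS Central is reachable at central.example.com and that an existing administrator API token is exported in the ROX_API_TOKEN environment variable; the host name, the token variable, and the example token name and role are assumptions, while the paths and request fields come from the tables above.

# List the roles the authenticated user may request for an API token
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  https://central.example.com/v1/apitokens/generate/allowed-roles

# Generate a token named ci-token for one of the allowed roles
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  -X POST https://central.example.com/v1/apitokens/generate \
  -d '{"name": "ci-token", "roles": ["Admin"]}'

# Revoke a token by its id (as returned in StorageTokenMetadata by GetAPITokens)
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  -X PATCH https://central.example.com/v1/apitokens/revoke/<token_id>

Note that GetAPITokens returns only StorageTokenMetadata, not the token string itself, so store the token value from the V1GenerateTokenResponse securely at generation time.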
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/apitokenservice
Chapter 4. Creating a Red Hat High-Availability cluster with Pacemaker
Chapter 4. Creating a Red Hat High-Availability cluster with Pacemaker Create a Red Hat High Availability two-node cluster using the pcs command-line interface with the following procedure. Configuring the cluster in this example requires that your system include the following components: 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com . Network switches for the private network. We recommend but do not require a private network for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches. A fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . Note You must ensure that your configuration conforms to Red Hat's support policies. For full information about Red Hat's support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters . 4.1. Installing cluster software Install the cluster software and configure your system for cluster creation with the following procedure. Procedure On each node in the cluster, enable the repository for high availability that corresponds to your system architecture. For example, to enable the high availability repository for an x86_64 system, you can enter the following subscription-manager command: On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel. Alternatively, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command. The following command displays a list of the available fence agents. Warning After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors. For more information, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster . If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. Note You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If it is installed, you can determine whether it is running with the firewall-cmd --state command. Note The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. The example here, which opens the ports that are generally required by a Pacemaker cluster, should be modified to suit local conditions. Enabling ports for the High Availability Add-On shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what each port is used for. In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster , which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node. Before the cluster can be configured, the pcsd daemon must be started and enabled to start up on boot on each node. 
This daemon works with the pcs command to manage configuration across the nodes in the cluster. On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start. 4.2. Installing the pcp-zeroconf package (recommended) When you set up your cluster, it is recommended that you install the pcp-zeroconf package for the Performance Co-Pilot (PCP) tool. PCP is Red Hat's recommended resource-monitoring tool for RHEL systems. Installing the pcp-zeroconf package allows you to have PCP running and collecting performance-monitoring data for the benefit of investigations into fencing, resource failures, and other events that disrupt the cluster. Note Cluster deployments where PCP is enabled will need sufficient space available for PCP's captured data on the file system that contains /var/log/pcp/ . Typical space usage by PCP varies across deployments, but 10Gb is usually sufficient when using the pcp-zeroconf default settings, and some environments may require less. Monitoring usage in this directory over a 14-day period of typical activity can provide a more accurate usage expectation. Procedure To install the pcp-zeroconf package, run the following command. This package enables pmcd and sets up data capture at a 10-second interval. For information about reviewing PCP data, see the Red Hat Knowledgebase solution Why did a RHEL High Availability cluster node reboot - and how can I prevent it from happening again? . 4.3. Creating a high availability cluster Create a Red Hat High Availability Add-On cluster with the following procedure. This example procedure creates a cluster that consists of the nodes z1.example.com and z2.example.com . Procedure Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs . The following command authenticates user hacluster on z1.example.com for both of the nodes in a two-node cluster that will consist of z1.example.com and z2.example.com . Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com . This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster. Enable the cluster services to run on each node in the cluster when the node is booted. Note For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node. You can display the current status of the cluster with the pcs cluster status command. Because there may be a slight delay before the cluster is up and running when you start the cluster services with the --start option of the pcs cluster setup command, you should ensure that the cluster is up and running before performing any subsequent actions on the cluster and its configuration. 4.4. Creating a high availability cluster with multiple links You can use the pcs cluster setup command to create a Red Hat High Availability cluster with multiple links by specifying all of the links for each node. 
The format for the basic command to create a two-node cluster with two links is as follows. For the full syntax of this command, see the pcs (8) man page. When creating a cluster with multiple links, you should take the following into account. The order of the addr= address parameters is important. The first address specified after a node name is for link0 , the second one for link1 , and so forth. By default, if link_priority is not specified for a link, the link's priority is equal to the link number. The link priorities are then 0, 1, 2, 3, and so forth, according to the order specified, with 0 being the highest link priority. The default link mode is passive , meaning the active link with the lowest-numbered link priority is used. With the default values of link_mode and link_priority , the first link specified will be used as the highest priority link, and if that link fails, the next link specified will be used. It is possible to specify up to eight links using the knet transport protocol, which is the default transport protocol. All nodes must have the same number of addr= parameters. As of RHEL 8.1, it is possible to add, remove, and change links in an existing cluster using the pcs cluster link add , the pcs cluster link remove , the pcs cluster link delete , and the pcs cluster link update commands. As with single-link clusters, do not mix IPv4 and IPv6 addresses in one link, although you can have one link running IPv4 and the other running IPv6. As with single-link clusters, you can specify addresses as IP addresses or as names as long as the names resolve to IPv4 or IPv6 addresses for which IPv4 and IPv6 addresses are not mixed in one link. The following example creates a two-node cluster named my_twolink_cluster with two nodes, rh80-node1 and rh80-node2 . rh80-node1 has two interfaces, IP address 192.168.122.201 as link0 and 192.168.123.201 as link1 . rh80-node2 has two interfaces, IP address 192.168.122.202 as link0 and 192.168.123.202 as link1 . To set a link priority to a different value than the default value, which is the link number, you can set the link priority with the link_priority option of the pcs cluster setup command. Each of the following two example commands creates a two-node cluster with two interfaces where the first link, link 0, has a link priority of 1 and the second link, link 1, has a link priority of 0. Link 1 will be used first and link 0 will serve as the failover link. Since link mode is not specified, it defaults to passive. These two commands are equivalent. If you do not specify a link number following the link keyword, the pcs interface automatically adds a link number, starting with the lowest unused link number. You can set the link mode to a different value than the default value of passive with the link_mode option of the pcs cluster setup command, as in the following example. The following example sets both the link mode and the link priority. For information about adding nodes to an existing cluster with multiple links, see Adding a node to a cluster with multiple links . For information about changing the links in an existing cluster with multiple links, see Adding and modifying links in an existing cluster . 4.5. Configuring fencing You must configure a fencing device for each node in the cluster. For information about the fence configuration commands and options, see Configuring fencing in a Red Hat High Availability cluster .
For general information about fencing and its importance in a Red Hat High Availability cluster, see the Red Hat Knowledgebase solution Fencing in a Red Hat High Availability Cluster . Note When configuring a fencing device, attention should be given to whether that device shares power with any nodes or devices in the cluster. If a node and its fence device do share power, then the cluster may be at risk of being unable to fence that node if the power to it and its fence device should be lost. Such a cluster should either have redundant power supplies for fence devices and nodes, or redundant fence devices that do not share power. Alternative methods of fencing such as SBD or storage fencing may also bring redundancy in the event of isolated power losses. Procedure This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map option. You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com . The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc . By default, this device will use a monitor interval of sixty seconds for each node. Note that you can use an IP address when specifying the host name for the nodes. The following command displays the parameters of an existing fencing device. After configuring your fence device, you should test the device. For information about testing a fence device, see Testing a fence device . Note Do not test your fence device by disabling the network interface, as this will not properly test fencing. Note Once fencing is configured and a cluster has been started, a network restart will trigger fencing for the node which restarts the network even when the timeout is not exceeded. For this reason, do not restart the network service while the cluster service is running because it will trigger unintentional fencing on the node. 4.6. Backing up and restoring a cluster configuration The following commands back up a cluster configuration in a tar archive and restore the cluster configuration files on all nodes from the backup. Procedure Use the following command to back up the cluster configuration in a tar archive. If you do not specify a file name, the standard output will be used. Note The pcs config backup command backs up only the cluster configuration itself as configured in the CIB; the configuration of resource daemons is out of the scope of this command. For example if you have configured an Apache resource in the cluster, the resource settings (which are in the CIB) will be backed up, while the Apache daemon settings (as set in`/etc/httpd`) and the files it serves will not be backed up. Similarly, if there is a database resource configured in the cluster, the database itself will not be backed up, while the database resource configuration (CIB) will be. Use the following command to restore the cluster configuration files on all cluster nodes from the backup. Specifying the --local option restores the cluster configuration files only on the node from which you run this command. 
If you do not specify a file name, the standard input will be used. 4.7. Enabling ports for the High Availability Add-On The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. You may need to modify which ports are open to suit local conditions. Note You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the firewalld daemon is installed, you can determine whether it is running with the firewall-cmd --state command. The following table shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what the port is used for. Table 4.1. Ports to Enable for High Availability Add-On Port When Required TCP 2224 Default pcsd port required on all nodes (needed by the pcsd Web UI and required for node-to-node communication). You can configure the pcsd port by means of the PCSD_PORT parameter in the /etc/sysconfig/pcsd file. It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbitrators or the quorum device host. TCP 3121 Required on all nodes if the cluster has any Pacemaker Remote nodes Pacemaker's pacemaker-based daemon on the full cluster nodes will contact the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host's network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes. TCP 5403 Required on the quorum device host when using a quorum device with corosync-qnetd . The default value can be changed with the -p option of the corosync-qnetd command. UDP 5404-5412 Required on corosync nodes to facilitate communication between nodes. It is crucial to open ports 5404-5412 in such a way that corosync from any node can talk to all nodes in the cluster, including itself. TCP 21064 Required on all nodes if the cluster contains any resources requiring DLM (such as GFS2 ). TCP 9929, UDP 9929 Required to be open on all cluster nodes and Booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster.
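If the predefined high-availability firewalld service used elsewhere in this chapter ( firewall-cmd --permanent --add-service=high-availability ) is not available in your environment, a rough equivalent is to open the individual ports from Table 4.1 directly. This is a sketch only; which ports you actually need depends on the cluster features you use, and the port numbers are taken from the table above.

# Open the High Availability Add-On ports individually (adjust to local policy)
firewall-cmd --permanent --add-port=2224/tcp          # pcsd
firewall-cmd --permanent --add-port=3121/tcp          # Pacemaker Remote nodes, if any
firewall-cmd --permanent --add-port=5403/tcp          # corosync-qnetd quorum device host, if used
firewall-cmd --permanent --add-port=5404-5412/udp     # corosync node-to-node communication
firewall-cmd --permanent --add-port=21064/tcp         # DLM, if the cluster uses GFS2
firewall-cmd --permanent --add-port=9929/tcp --add-port=9929/udp   # Booth ticket manager, if used
firewall-cmd --reload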
[ "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum install pcs pacemaker fence-agents-all", "yum install pcs pacemaker fence-agents- model", "rpm -q -a | grep fence fence-agents-rhevm-4.0.2-3.el7.x86_64 fence-agents-ilo-mp-4.0.2-3.el7.x86_64 fence-agents-ipmilan-4.0.2-3.el7.x86_64", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start pcsd.service systemctl enable pcsd.service", "yum install pcp-zeroconf", "pcs host auth z1.example.com z2.example.com Username: hacluster Password: z1.example.com: Authorized z2.example.com: Authorized", "pcs cluster setup my_cluster --start z1.example.com z2.example.com", "pcs cluster enable --all", "pcs cluster status Cluster Status: Stack: corosync Current DC: z2.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z2.example.com 2 Nodes configured 0 Resources configured", "pcs cluster setup pass:quotes[ cluster_name ] pass:quotes[ node1_name ] addr=pass:quotes[ node1_link0_address ] addr=pass:quotes[ node1_link1_address ] pass:quotes[ node2_name ] addr=pass:quotes[ node2_link0_address ] addr=pass:quotes[ node2_link1_address ]", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link link_priority=1 link link_priority=0 pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link linknumber=1 link_priority=0 link link_priority=1", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active link link_priority=1 link link_priority=0", "pcs stonith create myapc fence_apc_snmp ipaddr=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" login=\"apc\" passwd=\"apc\"", "pcs stonith config myapc Resource: myapc (class=stonith type=fence_apc_snmp) Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 login=apc passwd=apc Operations: monitor interval=60s (myapc-monitor-interval-60s)", "pcs config backup filename", "pcs config restore [--local] [ filename ]", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_creating-high-availability-cluster-configuring-and-managing-high-availability-clusters
Chapter 1. Selecting OpenShift AI administrator and user groups
Chapter 1. Selecting OpenShift AI administrator and user groups By default, all users authenticated in OpenShift can access OpenShift AI. Also by default, users with cluster-admin permissions are OpenShift AI administrators. A cluster admin is a superuser that can perform any action in any project in the OpenShift cluster. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. After a cluster admin user defines additional administrator and user groups in OpenShift, you can add those groups to OpenShift AI by selecting them in the OpenShift AI dashboard. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The groups that you want to select as administrator and user groups for OpenShift AI already exist in OpenShift. For more information, see Managing users and groups . Procedure From the OpenShift AI dashboard, click Settings User management . Select your OpenShift AI administrator groups: Under Data science administrator groups , click the text box and select an OpenShift group. Repeat this process to define multiple administrator groups. Select your OpenShift AI user groups: Under Data science user groups , click the text box and select an OpenShift group. Repeat this process to define multiple user groups. Important The system:authenticated setting allows all users authenticated in OpenShift to access OpenShift AI. Click Save changes . Verification Administrator users can successfully log in to OpenShift AI and have access to the Settings navigation menu. Non-administrator users can successfully log in to OpenShift AI. They can also access and use individual components, such as projects and workbenches.
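If the groups you want to select do not exist yet in OpenShift, a cluster admin can create them from the CLI before choosing them in the dashboard. The group and user names below are examples only; this is a minimal sketch, not a required naming scheme.

# Create an administrator group and a user group, then add members
oc adm groups new rhoai-admins
oc adm groups new rhoai-users
oc adm groups add-users rhoai-admins admin-user1
oc adm groups add-users rhoai-users data-scientist1

After the groups exist in OpenShift, they appear in the OpenShift AI User management settings described above and can be selected as administrator or user groups.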
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_resources/selecting-admin-and-user-groups_resource-mgmt
3.3. Non-blocking Statement Execution
3.3. Non-blocking Statement Execution JDBC query execution can indefinitely block the calling thread when a statement is executed or a resultset is being iterated. In some situations you may wish to not have your calling threads held in these blocked states. When using embedded connections, you may optionally use the org.teiid.jdbc.TeiidStatement and org.teiid.jdbc.TeiidPreparedStatement interfaces to execute queries with a callback org.teiid.jdbc.StatementCallback that will be notified of statement events, such as an available row, an exception, or completion. Your calling thread will be free to perform other work. The callback will be executed by an engine processing thread as needed. If your results processing is blocking and you want query processing to run concurrently with results processing, then your callback should implement onRow handling in a multi-threaded manner to allow the engine thread to continue. Note The non-blocking logic is limited to statement execution only. Other JDBC operations, such as connection creation or batched executions, do not yet have non-blocking options. If you access forward positions in the onRow method (calling next, isLast, isAfterLast, or absolute), they may not yet be valid and an org.teiid.jdbc.AsynchPositioningException will be thrown. That exception is recoverable if caught, or it can be avoided by calling TeiidResultSet.available() to determine if your desired positioning will be valid.
[ "PreparedStatement stmt = connection.prepareStatement(sql); TeiidPreparedStatement tStmt = stmt.unwrap(TeiidPreparedStatement.class); tStmt.submitExecute(new StatementCallback() { @Override public void onRow(Statement s, ResultSet rs) { //any logic that accesses the current row System.out.println(rs.getString(1)); } @Override public void onException(Statement s, Exception e) throws Exception { s.close(); } @Override public void onComplete(Statement s) throws Exception { s.close(); }, new RequestOptions() });" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/non-blocking_statement_execution1
Chapter 5. Working with platforms, portfolios, and products
Chapter 5. Working with platforms, portfolios, and products This section demonstrates the basic workflows for platforms, portfolios, and products to get started using automation services catalog. We will demonstrate how to: create portfolios add products from the source platform into the portfolio set approval processes for the portfolio share the portfolio with users 5.1. Creating a portfolio and adding products to it To create a portfolio: Click Create portfolio . Provide a New Portfolio Name and Description . Choose an Approval workflow from the drop-down list. Click Save . Our initial portfolio is empty, and we can start adding products from this empty state. To add a product from a platform: Click in the Filter by Platform field to view connected platforms. Select a platform. The screen will populate with products from that platform. Check each product to add to the portfolio. Click Add . 5.2. Setting approval processes for the portfolio We have now created a portfolio and added some products to it. Our next step is to set the approval process for the portfolio. Setting the approval process for our portfolio ensures that specific groups are designated to approve any orders placed by a user. To set an approval process for the portfolio: Navigate to Portfolios . Click on a portfolio tile and select Set approval . From the Set approval process drop-down menu, select an approval process to set for the portfolio. Click Save . 5.3. Sharing the portfolio with users The portfolio is now ready to share with groups of users. These designated user groups can order products collected in the portfolio, and their orders are approved or denied using the approval process set for the portfolio. To share a portfolio: Click Portfolios . Click on a portfolio, then click Share . The share icon indicates a shared portfolio. Adding groups: Under Invite group , search for groups to share the portfolio with. Select the level of permissions using the drop-down list. Edit sharing settings for existing groups: Under Groups with access , adjust permissions using the drop-down for each group. To stop sharing a portfolio with a group, click the remove icon. Click Send when finished. Our portfolio is now accessible to the groups of users, who can order the products we collected in it. We can now duplicate this portfolio, change its name and add new products, to share with different user groups. 5.4. Copying existing portfolios to edit and share with new groups Copying portfolios allows you to quickly duplicate an already organized portfolio to offer to different groups of users. Once a copy is created, you can add or remove products, share with new groups, and apply new approval processes to meet organizational requirements. We can also copy products from various portfolios to additional portfolios. 5.5. Copying products to different portfolios You can copy products from one portfolio to another as part of developing new portfolios to meet group needs. Once a product is copied, you can edit its basic fields, update its survey, and apply different approval processes. 5.6.
Workflows summary The workflows in this section of the guide demonstrated how to perform the basic actions for platforms, portfolios, and products necessary to get started using automation services catalog: add entitlements to access automation services catalog; create the necessary groups and users; connect to a source platform; create portfolios; add products from the source platform into the portfolio; set approval processes for the portfolio; share the portfolio with users; make a copy of our existing portfolio to share with different users; copy products from one portfolio to another. In the next section, we will describe working with the survey editor, and provide an overview of working with requests and orders.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_services_catalog/working_with_platforms_portfolios_and_products
Chapter 15. Backing up and restoring Red Hat Quay on a standalone deployment
Chapter 15. Backing up and restoring Red Hat Quay on a standalone deployment Use the content within this section to back up and restore Red Hat Quay in standalone deployments. 15.1. Backing up Red Hat Quay on standalone deployments This procedure describes how to create a backup of Red Hat Quay on standalone deployments. Procedure Create a temporary backup directory, for example, quay-backup : USD mkdir /tmp/quay-backup The following example command denotes the local directory that the Red Hat Quay was started in, for example, /opt/quay-install : Change into the directory that bind-mounts to /conf/stack inside of the container, for example, /opt/quay-install , by running the following command: USD cd /opt/quay-install Compress the contents of your Red Hat Quay deployment into an archive in the quay-backup directory by entering the following command: USD tar cvf /tmp/quay-backup/quay-backup.tar.gz * Example output: config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key Back up the Quay container service by entering the following command: Redirect the contents of your conf/stack/config.yaml file to your temporary quay-config.yaml file by entering the following command: USD podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml Obtain the DB_URI located in your temporary quay-config.yaml by entering the following command: USD grep DB_URI /tmp/quay-backup/quay-config.yaml Example output: Extract the PostgreSQL contents to your temporary backup directory in a backup .sql file by entering the following command: USD pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql Print the contents of your DISTRIBUTED_STORAGE_CONFIG by entering the following command: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 7: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 7: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Sync the quay bucket to the /tmp/quay-backup/blob-backup/ directory from the hostname of your DISTRIBUTED_STORAGE_CONFIG : USD aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2 Example output: It is recommended that you delete the quay-config.yaml file after syncing the quay bucket because it contains sensitive information. The quay-config.yaml file will not be lost because it is backed up in the quay-backup.tar.gz file. 15.2. Restoring Red Hat Quay on standalone deployments This procedure describes how to restore Red Hat Quay on standalone deployments. Prerequisites You have backed up your Red Hat Quay deployment. 
Procedure Create a new directory that will bind-mount to /conf/stack inside of the Red Hat Quay container: USD mkdir /opt/new-quay-install Copy the contents of your temporary backup directory created in Backing up Red Hat Quay on standalone deployments to the new-quay-install1 directory created in Step 1: USD cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/ Change into the new-quay-install directory by entering the following command: USD cd /opt/new-quay-install/ Extract the contents of your Red Hat Quay directory: USD tar xvf /tmp/quay-backup/quay-backup.tar.gz * Example output: Recall the DB_URI from your backed-up config.yaml file by entering the following command: USD grep DB_URI config.yaml Example output: postgresql://<username>:[email protected]/quay Run the following command to enter the PostgreSQL database server: USD sudo postgres Enter psql and create a new database in 172.24.10.50 to restore the quay databases, for example, example_restore_registry_quay_database , by entering the following command: USD psql "host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123" postgres=> CREATE DATABASE example_restore_registry_quay_database; Example output: Connect to the database by running the following command: postgres=# \c "example-restore-registry-quay-database"; Example output: You are now connected to database "example-restore-registry-quay-database" as user "postgres". Create a pg_trmg extension of your Quay database by running the following command: example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm; Example output: CREATE EXTENSION Exit the postgres CLI by entering the following command: \q Import the database backup to your new database by running the following command: USD psql "host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123" -W < /tmp/quay-backup/quay-backup.sql Example output: Update the value of DB_URI in your config.yaml from postgresql://<username>:[email protected]/quay to postgresql://<username>:[email protected]/example-restore-registry-quay-database before restarting the Red Hat Quay deployment. Note The DB_URI format is DB_URI postgresql://<login_user_name>:<login_user_password>@<postgresql_host>/<quay_database> . If you are moving from one PostgreSQL server to another PostgreSQL server, update the value of <login_user_name> , <login_user_password> and <postgresql_host> at the same time. In the /opt/new-quay-install directory, print the contents of your DISTRIBUTED_STORAGE_CONFIG bundle: USD cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10 Example output: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> Note Your DISTRIBUTED_STORAGE_CONFIG in /opt/new-quay-install must be updated before restarting your Red Hat Quay deployment. 
Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 13: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 13: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Create a new s3 bucket by entering the following command: USD aws s3 mb s3://<new_bucket_name> --region us-east-2 Example output: USD make_bucket: quay Upload all blobs to the new s3 bucket by entering the following command: USD aws s3 sync --no-verify-ssl \ --endpoint-url <example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/ 1 The Red Hat Quay registry endpoint must be the same before backup and after restore. Example output: upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec ... Before restarting your Red Hat Quay deployment, update the storage settings in your config.yaml: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>
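Once the database has been imported and both DB_URI and DISTRIBUTED_STORAGE_CONFIG have been updated, restart the registry so that it picks up the restored configuration. The following is a minimal sketch that reuses the container invocation from the backup example; the container name, bind-mount paths, and image tag are assumptions and must match your own deployment.

# Stop the old container and start it against the restored configuration directory
podman stop quay-app
podman rm quay-app
podman run -d --name quay-app \
  -v /opt/new-quay-install:/conf/stack:Z \
  -v /opt/quay-install/storage:/datastorage:Z \
  registry.redhat.io/quay/quay-rhel8:v3.9.10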
[ "mkdir /tmp/quay-backup", "podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.9.10", "cd /opt/quay-install", "tar cvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "podman inspect quay-app | jq -r '.[0].Config.CreateCommand | .[]' | paste -s -d ' ' - /usr/bin/podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.9.10", "podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml", "grep DB_URI /tmp/quay-backup/quay-config.yaml", "postgresql://<username>:[email protected]/quay", "pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2", "download: s3://<user_name>/registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a to registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a download: s3://<user_name>/registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d to registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d", "mkdir /opt/new-quay-install", "cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/", "cd /opt/new-quay-install/", "tar xvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "grep DB_URI config.yaml", "postgresql://<username>:[email protected]/quay", "sudo postgres", "psql \"host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123\" postgres=> CREATE DATABASE example_restore_registry_quay_database;", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm;", "CREATE EXTENSION", "\\q", "psql \"host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123\" -W < /tmp/quay-backup/quay-backup.sql", "SET SET SET SET SET", "cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 mb s3://<new_bucket_name> --region us-east-2", "make_bucket: quay", "aws s3 sync --no-verify-ssl --endpoint-url <example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. 
s3://quay/", "upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/standalone-deployment-backup-restore
4.250. python-slip
4.250. python-slip 4.250.1. RHBA-2012:0413 - python-slip bug fix update Updated python-slip packages that fix one bug are now available for Red Hat Enterprise Linux 6. The Simple Library for Python (SLIP) packages contain miscellaneous code for convenience, extension and workaround purposes. The python-slip packages have been upgraded to upstream version 0.2.20, which provides a number of bug fixes over the previous version. In addition, this update fixes a bug that caused previous versions of python-slip to incorrectly determine whether SELinux was enabled or not. Consequently, convenience functions for writing files always attempted to set SELinux labels even if SELinux was disabled. This could cause, for example, the system-config-date tool to fail to change settings. (BZ# 796323 ) All users of python-slip are advised to upgrade to these updated packages, which fix this bug.
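To apply the erratum on an affected system, update the package from the Red Hat Enterprise Linux 6 repositories; the exact set of packages pulled in depends on which python-slip subpackages and dependencies are installed.

# Apply the python-slip bug fix update (RHBA-2012:0413)
yum update python-slip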
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-slip
Chapter 4. Configure storage for OpenShift Container Platform services
Chapter 4. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have a plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 4.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . 
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 4.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as a backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web-console. Click ObjectBucketClaim --> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the status of managementState of Image Registry Operator to Managed . Edit the spec.storage section of Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console OR you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can the run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 4.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. 
In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 4.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 4.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 4.3. Persistent Volume Claims attached to prometheus-k8s-* pod 4.4. Overprovision level policy control [Technology Preview] Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . For more information see, OpenShift ClusterResourceQuota . With overprovision control, a ClusteResourceQuota is initiated, and you can set the storage capacity limit for each storage class. The alarm triggers when 80% of the capacity limit is consumed. Note Overprovision level policy control is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information, refer to Technology Preview Features Support Scope. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the step, for example, quota1 . 4.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 4.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. 
For example: For more information, see Configuring cluster logging . 4.5.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 4.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'", "oc describe noobaa", "oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d", "oc describe pod <image-registry-name>", "oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>", "oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>", "apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] 
overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]", "oc get clusterresourcequota -A oc describe clusterresourcequota -A", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/configure-storage-for-openshift-container-platform-services_rhodf
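Section 4.1 above creates the ocs4registry claim through the web console. The same PersistentVolumeClaim can also be created programmatically. The following is a minimal sketch using the kubernetes Python client, offered as an illustration rather than a supported replacement for the documented steps; the storage class name ocs-storagecluster-cephfs is an assumption and should be replaced with whichever class on your cluster uses the openshift-storage.cephfs.csi.ceph.com provisioner.

# Sketch: create the RWX claim for the image registry with the kubernetes client.
# The storage class name below is an assumption; list classes with `oc get sc`
# and pick the one backed by the openshift-storage.cephfs.csi.ceph.com provisioner.
from kubernetes import client, config

config.load_kube_config()  # uses the current oc/kubectl context

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(
        name="ocs4registry",
        namespace="openshift-image-registry",
    ),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],                   # Shared Access (RWX)
        storage_class_name="ocs-storagecluster-cephfs",   # assumption, see lead-in
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}                 # at least 100 GB per the procedure
        ),
    ),
)

api = client.CoreV1Api()
api.create_namespaced_persistent_volume_claim(
    namespace="openshift-image-registry", body=pvc
)
print("created PVC ocs4registry; wait for it to report Bound before editing the registry Config")

After the claim reports Bound, the registry Config is edited exactly as described in the procedure, referencing ocs4registry under spec.storage.pvc.claim.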
function::user_string_n2
function::user_string_n2 Name function::user_string_n2 - Retrieves string of given length from user space Synopsis Arguments addr the user space address to retrieve the string from n the maximum length of the string (if not null terminated) err_msg the error message to return when data isn't available Description Returns the C string of a maximum given length from a given user space address. Returns the given error message string on the rare cases when userspace data is not accessible at the given address.
[ "user_string_n2:string(addr:long,n:long,err_msg:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-n2
Chapter 8. Monitoring a high availability Red Hat Ceph Storage cluster
Chapter 8. Monitoring a high availability Red Hat Ceph Storage cluster When you deploy an overcloud with Red Hat Ceph Storage, Red Hat OpenStack Platform uses the ceph-mon monitor daemon to manage the Ceph cluster. Director deploys the daemon on all Controller nodes. 8.1. Checking Red Hat Ceph monitoring service status To check the status of the Red Hat Ceph Storage monitoring service, log in to a Controller node and run the service ceph status command. Procedure Log in to a Controller node and check that the Ceph Monitoring service is running: 8.2. Checking Red Hat Ceph monitoring configuration To check the configuration of the Red Hat Ceph Storage monitoring service, log in to a Controller node or a Red Hat Ceph node and open the /etc/ceph/ceph.conf file. Procedure Log in to a Controller node or a Ceph node and open the /etc/ceph/ceph.conf file to view the monitoring configuration parameters: This example shows the following information: All three Controller nodes are configured to monitor the Red Hat Ceph Storage cluster with the mon_initial_members parameter. The 172.19.0.11/24 network is configured to provide a communication path between the Controller nodes and the Red Hat Ceph Storage nodes. The Red Hat Ceph Storage nodes are assigned to a separate network from the Controller nodes, and the IP addresses for the monitoring Controller nodes are 172.18.0.15 , 172.18.0.16 , and 172.18.0.17 . 8.3. Checking Red Hat Ceph node status To check the status of a specific Red Hat Ceph Storage node, log in to the node and run the ceph -s command. Procedure Log in to the Ceph node and run the ceph -s command: This example output shows that the health parameter value is HEALTH_OK , which indicates that the Ceph node is active and healthy. The output also shows the three Ceph monitor services running on the three overcloud-controller nodes, along with the IP addresses and ports of those services. 8.4. Additional resources Red Hat Ceph product page
[ "sudo service ceph status === mon. overcloud-controller-0 === mon. overcloud-controller-0 : running {\"version\":\"0.94.1\"}", "[global] osd_pool_default_pgp_num = 128 osd_pool_default_min_size = 1 auth_service_required = cephx mon_initial_members = overcloud-controller-0 , overcloud-controller-1 , overcloud-controller-2 fsid = 8c835acc-6838-11e5-bb96-2cc260178a92 cluster_network = 172.19.0.11/24 auth_supported = cephx auth_cluster_required = cephx mon_host = 172.18.0.17,172.18.0.15,172.18.0.16 auth_client_required = cephx osd_pool_default_size = 3 osd_pool_default_pg_num = 128 public_network = 172.18.0.17/24", "ceph -s cluster 8c835acc-6838-11e5-bb96-2cc260178a92 health HEALTH_OK monmap e1: 3 mons at { overcloud-controller-0 =172.18.0.17:6789/0, overcloud-controller-1 =172.18.0.15:6789/0, overcloud-controller-2 =172.18.0.16:6789/0} election epoch 152, quorum 0,1,2 overcloud-controller-1 , overcloud-controller-2 , overcloud-controller-0 osdmap e543: 6 osds: 6 up, 6 in pgmap v1736: 256 pgs, 4 pools, 0 bytes data, 0 objects 267 MB used, 119 GB / 119 GB avail 256 active+clean" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_deployment_and_usage/assembly_monitoring-ha-ceph-cluster_rhosp
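The ceph -s output shown in the commands above is meant to be read by a person; for scripted checks the same status is available as JSON. The following is a minimal sketch that runs ceph -s --format json on a node and reports the overall health and the monitors in quorum. The exact JSON field names vary between Ceph releases, so the fallbacks below are assumptions to verify against your version.

# Sketch: scripted health check built around `ceph -s --format json`.
# Field names differ between Ceph releases; the fallbacks are assumptions.
import json
import subprocess
import sys

raw = subprocess.run(
    ["ceph", "-s", "--format", "json"],
    check=True,
    capture_output=True,
    text=True,
).stdout
status = json.loads(raw)

health = status.get("health", {})
# Newer releases report health["status"], older ones health["overall_status"].
overall = health.get("status") or health.get("overall_status") or "UNKNOWN"

quorum = status.get("quorum_names", [])
print(f"cluster health: {overall}")
print(f"monitors in quorum ({len(quorum)}): {', '.join(quorum)}")

sys.exit(0 if overall == "HEALTH_OK" else 1)

The non-zero exit code on anything other than HEALTH_OK makes the sketch easy to drop into a cron job or an external monitoring probe.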
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The AMQ Python examples require a running message broker with a queue named examples . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named examples . USD <broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2020-10-08 11:25:07 UTC
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_python_client/using_the_broker_with_the_examples
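The appendix above starts the broker and creates the examples queue but does not show a client exercising it. The following is a minimal sender sketch in Python, assuming the python-qpid-proton package is installed, the broker listens on the default AMQP port 5672 on localhost, and anonymous access is enabled as described in C.1. It is illustrative only and is not one of the packaged AMQ Python examples.

# Sketch: send one message to the `examples` queue on a local broker.
# Assumes python-qpid-proton is installed and anonymous access is enabled.
from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container


class ExampleSender(MessagingHandler):
    def __init__(self, url, address, count=1):
        super(ExampleSender, self).__init__()
        self.url = url
        self.address = address
        self.count = count
        self.sent = 0
        self.confirmed = 0

    def on_start(self, event):
        # Open a connection and attach a sender link to the target address.
        conn = event.container.connect(self.url)
        event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        # Send while the broker grants credit and messages remain.
        while event.sender.credit and self.sent < self.count:
            event.sender.send(Message(body="Hello from the examples queue"))
            self.sent += 1

    def on_accepted(self, event):
        self.confirmed += 1
        if self.confirmed == self.count:
            print("message(s) confirmed by the broker")
            event.connection.close()


Container(ExampleSender("localhost:5672", "examples")).run()

If the script prints the confirmation line, the broker and the examples queue are ready for the product's own example programs.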
Chapter 15. Testing Your Models
Chapter 15. Testing Your Models 15.1. Manage Connection Profiles 15.1.1. Manage Connection Profiles As described earlier, you can test your models in Teiid Designer by using the Preview Data action or via your deployable VDB. Teiid Designer utilizes the Eclipse Data Tools Platform (DTP) Connection Profile framework for connection management. Connection Profiles provide a mechanism to connect to JDBC and non-JDBC sources to access metadata for constructing metadata source models. Teiid Designer also provides a custom Teiid connection profile template designed as a JDBC source to a deployed VDB. By selecting various Teiid Designer Import options, any applicable Connection Profiles you have defined in your Database Development perspective will be available to use as your import source. From these import wizards you can also create new connection profiles or edit existing connection profiles without leaving the wizard. The Server view provides access to running Teiid instances and shows data source and VDB artifacts deployed there. The Create Data Source action available on this view utilizes the available and applicable connection profiles. 15.1.2. Set Connection Profile for Source Model Teiid Designer integrates Data Tools Connection Profiles by persisting pertinent connection information in each source model. This can occur through the import process or through the Modeling > Set Connection Profile action. 15.1.3. View Connection Profile for Source Model In addition to setting the connection profile on a source model, you can also view a source model's connection profile information via the Modeling > View Connection Profile Info action, which displays the detailed properties of the connection. Figure 15.1. Connection Profile Information Dialog (this image shows a flat file connection profile) Note If a source model has no associated connection profile, the following dialog will be displayed. Figure 15.2. No Connection Info Dialog 15.1.4. Remove Connection Profile from Source Model As a user, you may not want this connection information (i.e. URL, username, etc...) shared through your VDB. Designer provides a means to remove this connection information via the Modeling > Remove Connection Info action. When a source model is added without connection information, the user must supply or select the correct translator type.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/chap-testing_your_models
Chapter 3. Configuring the internal OAuth server
Chapter 3. Configuring the internal OAuth server 3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 3.2. OAuth token request flows and responses The OAuth server supports standard authorization code grant and the implicit grant OAuth authorization flows. When requesting an OAuth token using the implicit grant flow ( response_type=token ) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client ), these are the possible server responses from /oauth/authorize , and how they should be handled: Status Content Client response 302 Location header containing an access_token parameter in the URL fragment ( RFC 6749 section 4.2.2 ) Use the access_token value as the OAuth token. 302 Location header containing an error query parameter ( RFC 6749 section 4.1.2.1 ) Fail, optionally surfacing the error (and optional error_description ) query values to the user. 302 Other Location header Follow the redirect, and process the result using these rules. 401 WWW-Authenticate header present Respond to challenge if type is recognized (e.g. Basic , Negotiate , etc), resubmit request, and process the result using these rules. 401 WWW-Authenticate header missing No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token). Other Other Fail, optionally surfacing response body to the user. 3.3. Options for the internal OAuth server Several configuration options are available for the internal OAuth server. 3.3.1. OAuth token duration options The internal OAuth server generates two kinds of tokens: Token Description Access tokens Longer-lived tokens that grant access to the API. Authorize codes Short-lived tokens whose only use is to be exchanged for an access token. You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an OAuthClient object definition. 3.3.2. OAuth grant options When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client's grant strategy. The OAuth client requesting token must provide its own grant strategy. You can apply the following default methods: Grant option Description auto Auto-approve the grant and retry the request. prompt Prompt the user to approve or deny the grant. 3.4. Configuring the internal OAuth server's token duration You can configure default options for the internal OAuth server's token duration. Important By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses. If the default time is insufficient, then this can be modified using the following procedure. Procedure Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default. 
apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1 1 Set accessTokenMaxAgeSeconds to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used. Apply the new configuration file: Note Because you update the existing OAuth server, you must use the oc apply command to apply the change. USD oc apply -f </path/to/file.yaml> Confirm that the changes are in effect: USD oc describe oauth.config.openshift.io/cluster Example output ... Spec: Token Config: Access Token Max Age Seconds: 172800 ... 3.5. Configuring token inactivity timeout for the internal OAuth server You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuth configuration to set a token inactivity timeout. Edit the OAuth object: USD oc edit oauth cluster Add the spec.tokenConfig.accessTokenInactivityTimeout field and set your timeout value: apiVersion: config.openshift.io/v1 kind: OAuth metadata: ... spec: tokenConfig: accessTokenInactivityTimeout: 400s 1 1 Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s . Save the file to apply the changes. Check that the OAuth server pods have restarted: USD oc get clusteroperators authentication Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 145m Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.9.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Verification Log in to the cluster with an identity from your IDP. Execute a command and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 400 seconds. Try to execute a command from the same identity's session. This command should fail because the token should have expired due to inactivity longer than the configured timeout. Example output error: You must be logged in to the server (Unauthorized) 3.6. Customizing the internal OAuth server URL You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Warning If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. 
For example: USD oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1 1 For self-signed certificates, the ca.crt file must contain the custom CA certificate, otherwise the login will not succeed. The Cluster Authentication Operator publishes the OAuth server's serving certificate in the oauth-serving-cert config map in the openshift-config-managed namespace. You can find the certificate in the data.ca-bundle.crt key of the config map. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 3.7. OAuth server metadata Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover what the address of the <namespace_route> is without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification. Thus, any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information: 1 The authorization server's issuer identifier, which is a URL that uses the https scheme and has no query or fragment components. This is the location where .well-known RFC 5785 resources containing information about the authorization server are published. 2 URL of the authorization server's authorization endpoint. See RFC 6749 . 3 URL of the authorization server's token endpoint. See RFC 6749 . 4 JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised. 5 JSON array containing a list of the OAuth 2.0 response_type values that this authorization server supports. The array values used are the same as those used with the response_types parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591 . 6 JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the grant_types parameter defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591 . 7 JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the code_challenge_method parameter defined in Section 4.3 of RFC 7636 . 
The valid code challenge method values are those registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters . 3.8. Troubleshooting OAuth API events In some cases the API server returns an unexpected condition error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server's state. A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount . The following example warns of a service account that is missing a proper OAuth redirect URI: USD oc get events | grep ServiceAccount Example output 1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Running oc describe sa/<service_account_name> reports any OAuth events associated with the given service account name. USD oc describe sa/proxy | grep -A5 Events Example output Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> The following is a list of the possible event errors: No redirect URI annotations or an invalid URI is specified Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Invalid route specified Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Invalid reference type specified Reason Message NoSAOAuthRedirectURIs [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Missing SA tokens Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens
[ "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1", "oc apply -f </path/to/file.yaml>", "oc describe oauth.config.openshift.io/cluster", "Spec: Token Config: Access Token Max Age Seconds: 172800", "oc edit oauth cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1", "oc get clusteroperators authentication", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 145m", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.9.0 True False False 145m", "error: You must be logged in to the server (Unauthorized)", "oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }", "oc get events | grep ServiceAccount", "1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "oc describe sa/proxy | grep -A5 Events", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no 
tokens" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/configuring-internal-oauth
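Section 3.7 above lists the fields returned by the OAuth server metadata endpoint. The following is a minimal sketch of an in-cluster client that fetches and prints that metadata, assuming the requests library is available. The metadata URL is the one given in that section, and the CA bundle path is the standard service account mount inside a pod; adjust both if you query the endpoint from outside the cluster.

# Sketch: discover the OAuth server endpoints from inside the cluster.
# The CA path is the standard service-account mount; change it when running
# outside a pod, or point it at the custom CA described in section 3.6.
import requests  # assumed available: pip install requests

METADATA_URL = "https://openshift.default.svc/.well-known/oauth-authorization-server"
SERVICE_CA = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

resp = requests.get(METADATA_URL, verify=SERVICE_CA, timeout=10)
resp.raise_for_status()
meta = resp.json()

print("issuer:                ", meta["issuer"])
print("authorization endpoint:", meta["authorization_endpoint"])
print("token endpoint:        ", meta["token_endpoint"])
print("grant types:           ", ", ".join(meta["grant_types_supported"]))

An application can cache the authorization and token endpoints it discovers this way instead of hard-coding the OAuth route, which is exactly the situation the metadata endpoint is designed for.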
Managing storage devices
Managing storage devices Red Hat Enterprise Linux 9 Configuring and managing local and remote storage devices Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/index
6.3. Upgrading from Red Hat Enterprise Linux 6.X to Red Hat Enterprise Linux 7.X
6.3. Upgrading from Red Hat Enterprise Linux 6.X to Red Hat Enterprise Linux 7.X Install migration tool Disable all repositories Disable all the enabled repositories: Download the operating system as an ISO file Follow these steps to download the latest ISO file for one of the following operating systems. Red Hat Enterprise Linux 7 based Red Hat Gluster Storage 3.5 Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in. Click Downloads to go to the Software & Download Center. Click Red Hat Gluster Storage. Select 3.5 for RHEL 7 (latest) from the Version dropdown menu. Important If you are upgrading to Red Hat Gluster Storage 3.5, select version 3.4 for RHEL 7 . Bug 1762637 means that the Preupgrade Assistant only allows upgrading to Red Hat Enterprise Linux 7.6 at this time. Click Download Now beside the Red Hat Gluster Storage Server 3.5 on RHEL 7 Installation DVD. Red Hat Enterprise Linux 7 Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in. Click Downloads to go to the Software & Download Center. Click Versions 7 and below, beside Red Hat Enterprise Linux. Select 7. x (latest) from the Version dropdown menu. Important If you are upgrading to Red Hat Gluster Storage 3.5, select version 7.6 . Bug 1762637 means that the Preupgrade Assistant only allows upgrading to Red Hat Enterprise Linux 7.6 at this time. Click Download Now beside the Red Hat Enterprise Linux 7.x Binary DVD. Upgrade to Red Hat Enterprise Linux 7 using ISO Upgrade to Red Hat Enterprise Linux 7 using the Red Hat upgrade tool and reboot after the upgrade process is completed: Important The upgrade process is time-consuming depending on your system's configuration and amount of data.
[ "yum install redhat-upgrade-tool yum install yum-utils", "yum-config-manager --disable \\*", "redhat-upgrade-tool --iso ISO_filepath --cleanup-post reboot" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/rhel6_to_rhel7
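Before starting the upgrade it is worth confirming that the downloaded ISO is intact. The following is a minimal sketch, not part of the documented procedure, that computes the SHA-256 digest of the ISO so you can compare it against the checksum published beside the download on the Customer Portal; the default file path is a placeholder.

# Sketch: compute the SHA-256 of the downloaded ISO for comparison with the
# checksum shown on the Customer Portal download page. Path is a placeholder.
import hashlib
import sys

iso_path = sys.argv[1] if len(sys.argv) > 1 else "/root/rhgs-installation-dvd.iso"

digest = hashlib.sha256()
with open(iso_path, "rb") as iso:
    # Read in 1 MiB chunks so large ISOs do not need to fit in memory.
    for chunk in iter(lambda: iso.read(1024 * 1024), b""):
        digest.update(chunk)

print(f"{digest.hexdigest()}  {iso_path}")

If the printed digest does not match the published checksum, download the ISO again before running redhat-upgrade-tool.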
Chapter 27. Email Notifications
Chapter 27. Email Notifications Different events related to billing trigger email notifications for API providers and developers. 27.1. Provider notifications The users of the 3scale account (admins and members with Billing permission) can subscribe to or unsubscribe from the notifications related to billing at Account Settings (gear icon on the top-right) > Personal > Notification Preferences , under the Billing section: Action required: review invoices Sent a few days before the end of the billing cycle so that you can review invoices before they are sent to customers. Customer downgraded Sent when a customer changes to a plan with a lower monthly fixed price. Expiring credit card Sent when a customer's credit card is about to expire. Payment error (retry) Sent when payment fails, resulting in an unpaid invoice and a retry. Payment error (final) Sent when the final retry of a payment fails, resulting in a failed invoice. All admin users of the 3scale account will receive notifications regarding billing if they are subscribed to them. 27.2. Developer emails The email notifications sent to the developer accounts can be configured at Audience > Messages > Email Templates . The following emails are available: Credit card expired notification for buyer Sent when the credit card is due to expire soon. Invoice charged successfully for buyer Sent when the invoice has been successfully charged. Invoice charge failure for buyer with retry Sent when the invoice charge has failed and the invoice is in the Failed state, which means that the charge will be retried. Invoice charge failure for buyer without retry Sent when the invoice charge has failed for the third time; the invoice has moved to the Unpaid state and will not be retried. Upcoming invoice charge for buyer Sent when the invoice is issued for the developer. All admin users of the developer account will receive the above notifications. 27.2.1. Billing email address You can configure an email address that your customers can contact for resolving any issues with billing, for example, [email protected], in Audience > Messages > Support Emails in the Finance support email field. The email templates reference the email address with the Liquid drop {{ provider.finance_support_email }} .
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/email-notifications
Chapter 6. Configuring and managing Apicurio Registry deployments
Chapter 6. Configuring and managing Apicurio Registry deployments This chapter explains how to configure and manage optional settings for your Apicurio Registry deployment on OpenShift: Section 6.1, "Configuring Apicurio Registry health checks on OpenShift" Section 6.2, "Environment variables for Apicurio Registry health checks" Section 6.3, "Managing Apicurio Registry environment variables" Section 6.4, "Configuring Apicurio Registry deployment using PodTemplate" Section 6.5, "Configuring the Apicurio Registry web console" Section 6.6, "Configuring Apicurio Registry logging" Section 6.7, "Configuring Apicurio Registry event sourcing" 6.1. Configuring Apicurio Registry health checks on OpenShift You can configure optional environment variables for liveness and readiness probes to monitor the health of the Apicurio Registry server on OpenShift: Liveness probes test if the application can make progress. If the application cannot make progress, OpenShift automatically restarts the failing Pod. Readiness probes test if the application is ready to process requests. If the application is not ready, it can become overwhelmed by requests, and OpenShift stops sending requests for the time that the probe fails. If other Pods are OK, they continue to receive requests. Important The default values of the liveness and readiness environment variables are designed for most cases and should only be changed if required by your environment. Any changes to the defaults depend on your hardware, network, and amount of data stored. These values should be kept as low as possible to avoid unnecessary overhead. Prerequisites You must have an OpenShift cluster with cluster administrator access. You must have already installed Apicurio Registry on OpenShift. You must have already installed and configured your chosen Apicurio Registry storage in AMQ Streams or PostgreSQL. Procedure In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges. Click Installed Operators > Red Hat Integration - Service Registry Operator . On the ApicurioRegistry tab, click the Operator custom resource for your deployment, for example, example-apicurioregistry . In the main overview page, find the Deployment Name section and the corresponding DeploymentConfig name for your Apicurio Registry deployment, for example, example-apicurioregistry . In the left navigation menu, click Workloads > Deployment Configs , and select your DeploymentConfig name. Click the Environment tab, and enter your environment variables in the Single values env section, for example: NAME : LIVENESS_STATUS_RESET VALUE : 350 Click Save at the bottom. Alternatively, you can perform these steps using the OpenShift oc command. For more details, see the OpenShift CLI documentation . Additional resources Section 6.2, "Environment variables for Apicurio Registry health checks" OpenShift documentation on monitoring application health 6.2. Environment variables for Apicurio Registry health checks This section describes the available environment variables for Apicurio Registry health checks on OpenShift. These include liveness and readiness probes to monitor the health of the Apicurio Registry server on OpenShift. For an example procedure, see Section 6.1, "Configuring Apicurio Registry health checks on OpenShift" . Important The following environment variables are provided for reference only. The default values are designed for most cases and should only be changed if required by your environment. 
Any changes to the defaults depend on your hardware, network, and amount of data stored. These values should be kept as low as possible to avoid unnecessary overhead. Liveness environment variables Table 6.1. Environment variables for Apicurio Registry liveness probes Name Description Type Default LIVENESS_ERROR_THRESHOLD Number of liveness issues or errors that can occur before the liveness probe fails. Integer 1 LIVENESS_COUNTER_RESET Period in which the threshold number of errors must occur. For example, if this value is 60 and the threshold is 1, the check fails after two errors occur in 1 minute. Seconds 60 LIVENESS_STATUS_RESET Number of seconds that must elapse without any more errors for the liveness probe to reset to OK status. Seconds 300 LIVENESS_ERRORS_IGNORED Comma-separated list of ignored liveness exceptions. String io.grpc.StatusRuntimeException,org.apache.kafka.streams.errors.InvalidStateStoreException Note Because OpenShift automatically restarts a Pod that fails a liveness check, the liveness settings, unlike readiness settings, do not directly affect the behavior of Apicurio Registry on OpenShift. Readiness environment variables Table 6.2. Environment variables for Apicurio Registry readiness probes Name Description Type Default READINESS_ERROR_THRESHOLD Number of readiness issues or errors that can occur before the readiness probe fails. Integer 1 READINESS_COUNTER_RESET Period in which the threshold number of errors must occur. For example, if this value is 60 and the threshold is 1, the check fails after two errors occur in 1 minute. Seconds 60 READINESS_STATUS_RESET Number of seconds that must elapse without any more errors for the readiness probe to reset to OK status. In this case, this means how long the Pod stays in the not-ready state until it returns to normal operation. Seconds 300 READINESS_TIMEOUT Readiness tracks the timeout of two operations: How long it takes for storage requests to complete How long it takes for HTTP REST API requests to return a response If these operations take more time than the configured timeout, this is counted as a readiness issue or error. This value controls the timeouts for both operations. Seconds 5 Additional resources Section 6.1, "Configuring Apicurio Registry health checks on OpenShift" OpenShift documentation on monitoring application health 6.3. Managing Apicurio Registry environment variables The Apicurio Registry Operator manages the most common Apicurio Registry configuration, but there are some options that it does not support yet. If a high-level configuration option is not available in the ApicurioRegistry CR, you can use an environment variable to adjust it. You can set these environment variables directly in the ApicurioRegistry CR, in the spec.configuration.env field. They are then forwarded to the Deployment resource of Apicurio Registry. Procedure You can manage Apicurio Registry environment variables by using the OpenShift web console or CLI. OpenShift web console Select the Installed Operators tab, and then Red Hat Integration - Service Registry Operator . On the Apicurio Registry tab, click the ApicurioRegistry CR for your Apicurio Registry deployment. Click the YAML tab and then edit the spec.configuration.env section as needed. The following example shows how to set default global content rules: apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # ...
env: - name: REGISTRY_RULES_GLOBAL_VALIDITY value: FULL # One of: NONE, SYNTAX_ONLY, FULL - name: REGISTRY_RULES_GLOBAL_COMPATIBILITY value: FULL # One of: NONE, BACKWARD, BACKWARD_TRANSITIVE, FORWARD, FORWARD_TRANSITIVE, FULL, FULL_TRANSITIVE OpenShift CLI Select the project where Apicurio Registry is installed. Run oc get apicurioregistry to get the list of ApicurioRegistry CRs Run oc edit apicurioregistry on the CR representing the Apicurio Registry instance that you want to configure. Add or modify the environment variable in the spec.configuration.env section. The Apicurio Registry Operator might attempt to set an environment variable that is already explicitly specified in the spec.configuration.env field. If an environment variable configuration has a conflicting value, the value set by Apicurio Registry Operator takes precedence. You can avoid this conflict by either using the high-level configuration for the feature, or only using the explicitly specified environment variables. The following is an example of a conflicting configuration: apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # ... ui: readOnly: true env: - name: REGISTRY_UI_FEATURES_READONLY value: false This configuration results in the Apicurio Registry web console being in read-only mode. 6.4. Configuring Apicurio Registry deployment using PodTemplate Important This is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The ApicurioRegistry CRD contains the spec.deployment.podTemplateSpecPreview field, which has the same structure as the field spec.template in a Kubernetes Deployment resource (the PodTemplateSpec struct). With some restrictions, the Apicurio Registry Operator forwards the data from this field to the corresponding field in the Apicurio Registry deployment. This provides greater configuration flexibility, without the need for the Apicurio Registry Operator to natively support each use case. The following table contains a list of subfields that are not accepted by the Apicurio Registry Operator, and result in a configuration error: Table 6.3. Restrictions on the podTemplateSpecPreview subfields podTemplateSpecPreview subfield Status Details metadata.annotations alternative exists spec.deployment.metadata.annotations metadata.labels alternative exists spec.deployment.metadata.labels spec.affinity alternative exists spec.deployment.affinity spec.containers[*] warning To configure the Apicurio Registry container, name: registry must be used spec.containers[name = "registry"].env alternative exists spec.configuration.env spec.containers[name = "registry"].image reserved - spec.imagePullSecrets alternative exists spec.deployment.imagePullSecrets spec.tolerations alternative exists spec.deployment.tolerations Warning If you set a field in podTemplateSpecPreview , its value must be valid, as if you configured it in the Apicurio Registry Deployment directly. 
The Apicurio Registry Operator might still modify the values you provided, but it will not fix an invalid value or make sure a default value is present. Additional resources Kubernetes documentation on Pod templates 6.5. Configuring the Apicurio Registry web console You can set optional environment variables to configure the Apicurio Registry web console specifically for your deployment environment or to customize its behavior. Prerequisites You have already installed Apicurio Registry. Configuring the web console deployment environment When you access the Apicurio Registry web console in your browser, some initial configuration settings are loaded. The following configuration settings are important: URL for core Apicurio Registry server REST API URL for Apicurio Registry web console client Typically, Apicurio Registry automatically detects and generates these settings, but there are some deployment environments where this automatic detection can fail. If this happens, you can configure environment variables to explicitly set these URLs for your environment. Procedure Configure the following environment variables to override the default URLs: REGISTRY_UI_CONFIG_APIURL : Specifies the URL for the core Apicurio Registry server REST API. For example, https://registry.my-domain.com/apis/registry REGISTRY_UI_CONFIG_UIURL : Specifies the URL for the Apicurio Registry web console client. For example, https://registry.my-domain.com/ui Configuring the web console in read-only mode You can configure the Apicurio Registry web console in read-only mode as an optional feature. This mode disables all features in the Apicurio Registry web console that allow users to make changes to registered artifacts. For example, this includes the following: Creating an artifact Uploading a new artifact version Updating artifact metadata Deleting an artifact Procedure Configure the following environment variable: REGISTRY_UI_FEATURES_READONLY : Set to true to enable read-only mode. Defaults to false . 6.6. Configuring Apicurio Registry logging You can set Apicurio Registry logging configuration at runtime. Apicurio Registry provides a REST endpoint to set the log level for specific loggers for finer grained logging. This section explains how to view and set Apicurio Registry log levels at runtime using the Apicurio Registry /admin REST API. Prerequisites Get the URL to access your Apicurio Registry instance, or get your Apicurio Registry route if you have Apicurio Registry deployed on OpenShift. This simple example uses a URL of localhost:8080 . Procedure Use this curl command to obtain the current log level for the logger io.apicurio.registry.storage : USD curl -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage HTTP/1.1 200 OK [...] Content-Type: application/json {"name":"io.apicurio.registry.storage","level":"INFO"} Use this curl command to change the log level for the logger io.apicurio.registry.storage to DEBUG : USD curl -X PUT -i -H "Content-Type: application/json" --data '{"level":"DEBUG"}' localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage HTTP/1.1 200 OK [...] Content-Type: application/json {"name":"io.apicurio.registry.storage","level":"DEBUG"} Use this curl command to revert the log level for the logger io.apicurio.registry.storage to its default value: USD curl -X DELETE -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage HTTP/1.1 200 OK [...] 
Content-Type: application/json {"name":"io.apicurio.registry.storage","level":"INFO"} 6.7. Configuring Apicurio Registry event sourcing Important This is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . You can configure Apicurio Registry to send events when changes are made to registry content. For example, Apicurio Registry can trigger events when schema or API artifacts, groups, or content rules are created, updated, deleted, and so on. You can configure Apicurio Registry to send events to your applications and to third-party integrations for these kinds of changes. There are different protocols available for transporting events. The currently implemented protocols are HTTP and Apache Kafka. However, regardless of the protocol, the events are sent by using the CNCF CloudEvents specification. You can configure Apicurio Registry event sourcing by using Java system properties or the equivalent environment variables. Apicurio Registry event types All of the event types are defined in io.apicurio.registry.events.dto.RegistryEventType . For example, these include the following event types: io.apicurio.registry.artifact-created io.apicurio.registry.artifact-updated io.apicurio.registry.artifact-state-changed io.apicurio.registry.artifact-rule-created io.apicurio.registry.global-rule-created io.apicurio.registry.group-created Prerequisites You must have an application that you want to send Apicurio Registry cloud events to. For example, this can be a custom application or a third-party application. Configuring Apicurio Registry event sourcing by using HTTP The example in this section shows a custom application running on http://my-app-host:8888/events . Procedure When using the HTTP protocol, set your Apicurio Registry configuration to send events to your application as follows: registry.events.sink.my-custom-consumer=http://my-app-host:8888/events If required, you can configure multiple event consumers as follows: registry.events.sink.my-custom-consumer=http://my-app-host:8888/events registry.events.sink.other-consumer=http://my-consumer.com/events Configuring Apicurio Registry event sourcing by using Apache Kafka The example in this section shows a Kafka topic named my-registry-events on a Kafka broker running on my-kafka-host:9092 . Procedure When using the Kafka protocol, set your Kafka topic as follows: registry.events.kafka.topic=my-registry-events You can set the configuration for the Kafka producer by using the KAFKA_BOOTSTRAP_SERVERS environment variable: KAFKA_BOOTSTRAP_SERVERS=my-kafka-host:9092 Alternatively, you can set the properties for the Kafka producer by using the registry.events.kafka.config prefix, for example: registry.events.kafka.config.bootstrap.servers=my-kafka-host:9092 If required, you can also set the Kafka topic partition to use to produce events: registry.events.kafka.topic-partition=1 Additional resources For more details, see the CNCF CloudEvents specification .
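When the Kafka protocol is configured as shown above, one quick way to confirm that events are actually being produced is to read the topic with the standard console consumer that ships with Apache Kafka. This is an illustrative check only; the broker address and topic name below match the examples in this section, and the path to the Kafka CLI scripts depends on your installation.
bin/kafka-console-consumer.sh --bootstrap-server my-kafka-host:9092 --topic my-registry-events --from-beginning
Each record that appears corresponds to a registry change event, such as io.apicurio.registry.artifact-created, transported according to the CNCF CloudEvents specification described above.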
[ "apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # env: - name: REGISTRY_RULES_GLOBAL_VALIDITY value: FULL # One of: NONE, SYNTAX_ONLY, FULL - name: REGISTRY_RULES_GLOBAL_COMPATIBILITY value: FULL # One of: NONE, BACKWARD, BACKWARD_TRANSITIVE, FORWARD, FORWARD_TRANSITIVE, FULL, FULL_TRANSITIVE", "apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # ui: readOnly: true env: - name: REGISTRY_UI_FEATURES_READONLY value: false", "curl -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage HTTP/1.1 200 OK [...] Content-Type: application/json {\"name\":\"io.apicurio.registry.storage\",\"level\":\"INFO\"}", "curl -X PUT -i -H \"Content-Type: application/json\" --data '{\"level\":\"DEBUG\"}' localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage HTTP/1.1 200 OK [...] Content-Type: application/json {\"name\":\"io.apicurio.registry.storage\",\"level\":\"DEBUG\"}", "curl -X DELETE -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage HTTP/1.1 200 OK [...] Content-Type: application/json {\"name\":\"io.apicurio.registry.storage\",\"level\":\"INFO\"}" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/installing_and_deploying_apicurio_registry_on_openshift/managing-the-registry
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/building_and_deploying_data_grid_clusters_with_helm/making-open-source-more-inclusive_datagrid
Chapter 1. Red Hat Insights compliance service overview
Chapter 1. Red Hat Insights compliance service overview The Red Hat Insights for Red Hat Enterprise Linux compliance service enables IT security and compliance administrators to assess, monitor, and report on the security-policy compliance of RHEL systems. The compliance service provides a simple but powerful user interface, enabling the creation, configuration, and management of SCAP security policies. With the filtering and context-adding features built in, IT security administrators can easily identify and manage security compliance issues in the RHEL infrastructure. This documentation describes some of the functionality of the compliance service, to help users understand reporting, manage issues, and get the maximum value from the service. You can also create Ansible Playbooks to resolve security compliance issues and share reports with stakeholders to communicate compliance status. Additional Resources Generating Compliance Service Reports 1.1. Requirements and prerequisites The compliance service is part of Red Hat Insights for Red Hat Enterprise Linux, which is included with your Red Hat Enterprise Linux (RHEL) subscription and can be used with all versions of RHEL currently supported by Red Hat. You do not need additional Red Hat subscriptions to use Insights for Red Hat Enterprise Linux and the compliance service. 1.2. Supported configurations Red Hat supports specific versions of the SCAP Security Guide (SSG) for each minor version of Red Hat Enterprise Linux (RHEL). The rules and policies in an SSG version are only accurate for one RHEL minor version. To receive accurate compliance reporting, the system must have the supported SSG version installed. Red Hat Enterprise Linux minor versions ship and upgrade with the supported SSG version included. However, some organizations may decide to continue using an earlier version temporarily, prior to upgrading. If a policy includes systems using unsupported SSG versions, an unsupported warning, preceded by the number of affected systems, is visible next to the policy in Security > Compliance > Reports . Note For more information about which versions of the SCAP Security Guide are supported in RHEL, refer to Insights Compliance - Supported configurations . Example of a compliance policy with a system running an unsupported version of SSG 1.2.1. Frequently asked questions about the compliance service How do I interpret the SSG package name? Package names look like this: scap-security-guide-0.1.43-13.el7 . The SSG version in this case is 0.1.43; the release is 13 and the architecture is el7. The release number can differ from the version number shown in the table; however, the version number must match as indicated below for it to be a supported configuration. (A quick command for checking the SSG version installed on a system is shown at the end of this chapter.) What if Red Hat supports more than one SSG for my RHEL minor version? When more than one SSG version is supported for a RHEL minor version, as is the case with RHEL 7.9 and RHEL 8.1, the compliance service will use the latest available version. Why is my old policy no longer supported by SSG? As RHEL minor versions get older, fewer SCAP profiles are supported. To view which SCAP profiles are supported, refer to Insights Compliance - Supported configurations . More about limitations of unsupported configurations The following conditions apply to the results for unsupported configurations: These results are a "best-guess" effort because using any SSG version other than what is supported by Red Hat can lead to inaccurate results.
Important Although you can still see results for a system with an unsupported version of SSG installed, those results may be considered inaccurate for compliance reporting purposes. Results for systems using an unsupported version of SSG are not included in the overall compliance assessment for the policy. Remediations are not available for rules on systems with an unsupported version of SSG installed. 1.3. Best practices To benefit from the best user experience and receive the most accurate results in the compliance service, Red Hat recommends that you follow some best practices. Ensure that the RHEL OS system minor version is visible to the Insights client If the compliance service cannot see your RHEL OS minor version, then the supported SCAP Security Guide version cannot be validated and your reporting may not be accurate. The Insights client allows users to redact certain data, including Red Hat Enterprise Linux OS minor version, from the data payload that is uploaded to Red Hat Insights for Red Hat Enterprise Linux. This will prohibit accurate compliance service reporting. To learn more about data redaction, see the following documentation: Red Hat Insights client data redaction . Create security policies within the compliance service Creating your organization's security policies within the compliance service allows you to: Associate many systems with the policy. Use the supported SCAP Security Guide for your RHEL minor version. Edit which rules are included, based on your organization's requirements. 1.4. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 1.4.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.4.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.4.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. 
On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. 1.4.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. Additional resources For more information about user access and permissions, see User Access Configuration Guide for Role-based Access Control (RBAC) . 1.4.3. User Access roles for compliance-service users The following roles enable standard or enhanced access to compliance features in Insights for Red Hat Enterprise Linux: Compliance viewer. A compliance-service role that grants read access to any compliance resource. Compliance administrator. A compliance-service role that grants full access to any compliance resource. If a procedure requires that you be granted the Compliance administrator role or other enhanced permissions, it will be noted in the Prerequisites for that procedure.
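As a practical follow-up to the supported-configurations FAQ earlier in this chapter, you can check which SSG version a system has installed by querying the RPM database on that system. This is only a quick illustrative check; the package name it returns follows the format described in the FAQ (for example, scap-security-guide-0.1.43-13.el7).
rpm -q scap-security-guide
Compare the reported version against the Insights Compliance - Supported configurations article for the system's RHEL minor version.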
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems/intro-compliance
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 1.0-35 Thu May 23 2019 Jiri Herrmann Version for 7.7 Beta publication Revision 1.0-34 Tue Oct 25 2018 Jiri Herrmann Version for 7.6 GA publication Revision 1.0-32 Tue Aug 14 2018 Jiri Herrmann Version for 7.6 Beta publication Revision 1.0-31 Wed Apr 4 2018 Jiri Herrmann Version for 7.5 GA publication Revision 1.0-27 Mon Jul 27 2017 Jiri Herrmann Version for 7.4 GA publication Revision 1.0-24 Mon Oct 17 2016 Jiri Herrmann Version for 7.3 GA publication Revision 1.0-22 Mon Dec 21 2015 Laura Novich Republished Guide to fix several bugs Revision 1.0-19 Thu Oct 08 2015 Jiri Herrmann Cleaned up Revision History
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/appe-virtualization_tuning_optimization_guide-revision_history
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1]
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1] Description HelmChartRepository holds cluster-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the cluster.. 10.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description, it can be used by UI for displaying purposes disabled boolean If set to true, disable the repo usage in the cluster/namespace name string Optional associated human readable repository name, it can be used by UI for displaying purposes 10.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. url string Chart repository URL 10.1.3. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 10.1.4. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. 
Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 10.1.5. .status Description Observed status of the repository within the cluster.. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 10.1.6. .status.conditions Description conditions is a list of conditions and their statuses Type array 10.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 10.2. 
API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/helmchartrepositories DELETE : delete collection of HelmChartRepository GET : list objects of kind HelmChartRepository POST : create a HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} DELETE : delete a HelmChartRepository GET : read the specified HelmChartRepository PATCH : partially update the specified HelmChartRepository PUT : replace the specified HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status GET : read status of the specified HelmChartRepository PATCH : partially update status of the specified HelmChartRepository PUT : replace status of the specified HelmChartRepository 10.2.1. /apis/helm.openshift.io/v1beta1/helmchartrepositories HTTP method DELETE Description delete collection of HelmChartRepository Table 10.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HelmChartRepository Table 10.2. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a HelmChartRepository Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.4. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.5. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 202 - Accepted HelmChartRepository schema 401 - Unauthorized Empty 10.2.2. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} Table 10.6. Global path parameters Parameter Type Description name string name of the HelmChartRepository HTTP method DELETE Description delete a HelmChartRepository Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HelmChartRepository Table 10.9. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HelmChartRepository Table 10.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HelmChartRepository Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty 10.2.3. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status Table 10.15. Global path parameters Parameter Type Description name string name of the HelmChartRepository HTTP method GET Description read status of the specified HelmChartRepository Table 10.16. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HelmChartRepository Table 10.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.18. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HelmChartRepository Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty
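The reference above documents only the schema and the API endpoints. As an illustrative sketch (not taken from this reference), the following minimal manifest combines the required spec.connectionConfig.url field with the optional display name described in Section 10.1; the repository name and URL are placeholders for your own chart repository, and you can create the resource with a standard oc apply -f command.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: example-charts
spec:
  name: Example Charts
  connectionConfig:
    url: https://charts.example.com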
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/helmchartrepository-helm-openshift-io-v1beta1
Chapter 6. Adding file and object storage to an existing external OpenShift Data Foundation cluster
Chapter 6. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites You have OpenShift Data Foundation 4.9 installed and running on the OpenShift Container Platform version 4.9 or above. Also, the OpenShift Data Foundation Cluster in external mode is in Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using the following command: Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. 
User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments), except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads > Secrets . Set the project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) > Edit Secret . Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to the Storage > OpenShift Data Foundation > Storage Systems tab, and then click the storage system name. On the Overview > Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads > Pods and verify that the csi-cephfsplugin-* pods are newly created and are in the Running state. Click Storage > Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage > Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to the Storage > OpenShift Data Foundation > Storage Systems tab, and then click the storage system name. Click the Object tab and confirm that Object Service and Data resiliency each have a green tick indicating they are healthy.
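If you prefer to confirm the same verification points from the command line, the following oc commands are an illustrative alternative to the console steps above; they check for the CephFS CSI plugin pods and the two storage classes named in the verification steps.
oc get pods -n openshift-storage | grep csi-cephfsplugin
oc get storageclass ocs-external-storagecluster-cephfs
oc get storageclass ocs-external-storagecluster-ceph-rgw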
[ "get csv USD(oc get csv -n openshift-storage | grep ocs-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\\.features\\.ocs\\.openshift\\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix", "caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster
Packaging Red Hat build of OpenJDK 11 applications in containers
Packaging Red Hat build of OpenJDK 11 applications in containers Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/packaging_red_hat_build_of_openjdk_11_applications_in_containers/index
Chapter 10. Creating a Keycloak client
Chapter 10. Creating a Keycloak client Keycloak clients authenticate hub users with Red Hat Single Sign-On. When a user authenticates the request goes through the Keycloak client. When Single Sign-On validates or issues the OAuth token, the client provides the response to automation hub and the user can log in. Procedure Navigate to Operator Installed Operators . Select the Red Hat Single Sign-On Operator project. Select the Keycloak Client tab and click Create Keycloak Client . On the Keycloak Realm form, select YAML view . Replace the default YAML file with the following: kind: KeycloakClient apiVersion: keycloak.org/v1alpha1 metadata: name: automation-hub-client-secret labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform client: name: Automation Hub clientId: automation-hub secret: <client-secret> 1 clientAuthenticatorType: client-secret description: Client for automation hub attributes: user.info.response.signature.alg: RS256 request.object.signature.alg: RS256 directAccessGrantsEnabled: true publicClient: true protocol: openid-connect standardFlowEnabled: true protocolMappers: - config: access.token.claim: "true" claim.name: "family_name" id.token.claim: "true" jsonType.label: String user.attribute: lastName userinfo.token.claim: "true" consentRequired: false name: family name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper - config: userinfo.token.claim: "true" user.attribute: email id.token.claim: "true" access.token.claim: "true" claim.name: email jsonType.label: String name: email protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: multivalued: "true" access.token.claim: "true" claim.name: "resource_access.USD{client_id}.roles" jsonType.label: String name: client roles protocol: openid-connect protocolMapper: oidc-usermodel-client-role-mapper consentRequired: false - config: userinfo.token.claim: "true" user.attribute: firstName id.token.claim: "true" access.token.claim: "true" claim.name: given_name jsonType.label: String name: given name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: id.token.claim: "true" access.token.claim: "true" userinfo.token.claim: "true" name: full name protocol: openid-connect protocolMapper: oidc-full-name-mapper consentRequired: false - config: userinfo.token.claim: "true" user.attribute: username id.token.claim: "true" access.token.claim: "true" claim.name: preferred_username jsonType.label: String name: <username> protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: access.token.claim: "true" claim.name: "group" full.path: "true" id.token.claim: "true" userinfo.token.claim: "true" consentRequired: false name: group protocol: openid-connect protocolMapper: oidc-group-membership-mapper - config: multivalued: 'true' id.token.claim: 'true' access.token.claim: 'true' userinfo.token.claim: 'true' usermodel.clientRoleMapping.clientId: 'automation-hub' claim.name: client_roles jsonType.label: String name: client_roles protocolMapper: oidc-usermodel-client-role-mapper protocol: openid-connect - config: id.token.claim: "true" access.token.claim: "true" included.client.audience: 'automation-hub' protocol: openid-connect name: audience mapper protocolMapper: oidc-audience-mapper roles: - name: "hubadmin" description: "An administrator role for automation hub" 1 Replace this with a 
unique value. Click Create and wait for the process to complete. When automation hub is deployed, you must update the client with the "Valid Redirect URIs" and "Web Origins" as described in Updating the Red Hat Single Sign-On client . Additionally, the client comes pre-configured with token mappers; however, if your authentication provider does not provide group data to Red Hat SSO, then the group mapping must be updated to reflect how that information is passed. This is commonly done by user attribute.
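The <client-secret> value in the YAML above is a placeholder that you must supply. As an illustrative sketch only, you can generate a random value with openssl and, after clicking Create , confirm that the resource exists and inspect any status reported by the Operator; the resource name and namespace below match the example CR.
openssl rand -base64 32
oc get keycloakclient automation-hub-client-secret -n rh-sso -o yaml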
[ "kind: KeycloakClient apiVersion: keycloak.org/v1alpha1 metadata: name: automation-hub-client-secret labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform client: name: Automation Hub clientId: automation-hub secret: <client-secret> 1 clientAuthenticatorType: client-secret description: Client for automation hub attributes: user.info.response.signature.alg: RS256 request.object.signature.alg: RS256 directAccessGrantsEnabled: true publicClient: true protocol: openid-connect standardFlowEnabled: true protocolMappers: - config: access.token.claim: \"true\" claim.name: \"family_name\" id.token.claim: \"true\" jsonType.label: String user.attribute: lastName userinfo.token.claim: \"true\" consentRequired: false name: family name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper - config: userinfo.token.claim: \"true\" user.attribute: email id.token.claim: \"true\" access.token.claim: \"true\" claim.name: email jsonType.label: String name: email protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: multivalued: \"true\" access.token.claim: \"true\" claim.name: \"resource_access.USD{client_id}.roles\" jsonType.label: String name: client roles protocol: openid-connect protocolMapper: oidc-usermodel-client-role-mapper consentRequired: false - config: userinfo.token.claim: \"true\" user.attribute: firstName id.token.claim: \"true\" access.token.claim: \"true\" claim.name: given_name jsonType.label: String name: given name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: id.token.claim: \"true\" access.token.claim: \"true\" userinfo.token.claim: \"true\" name: full name protocol: openid-connect protocolMapper: oidc-full-name-mapper consentRequired: false - config: userinfo.token.claim: \"true\" user.attribute: username id.token.claim: \"true\" access.token.claim: \"true\" claim.name: preferred_username jsonType.label: String name: <username> protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: access.token.claim: \"true\" claim.name: \"group\" full.path: \"true\" id.token.claim: \"true\" userinfo.token.claim: \"true\" consentRequired: false name: group protocol: openid-connect protocolMapper: oidc-group-membership-mapper - config: multivalued: 'true' id.token.claim: 'true' access.token.claim: 'true' userinfo.token.claim: 'true' usermodel.clientRoleMapping.clientId: 'automation-hub' claim.name: client_roles jsonType.label: String name: client_roles protocolMapper: oidc-usermodel-client-role-mapper protocol: openid-connect - config: id.token.claim: \"true\" access.token.claim: \"true\" included.client.audience: 'automation-hub' protocol: openid-connect name: audience mapper protocolMapper: oidc-audience-mapper roles: - name: \"hubadmin\" description: \"An administrator role for automation hub\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/proc-create-keycloak-client_using-a-rhsso-operator
Chapter 2. Configuration
Chapter 2. Configuration 2.1. The Beast and CivetWeb front end web servers The Ceph Object Gateway provides Beast and CivetWeb as front ends, both are C/C++ embedded web servers. Beast Starting with Red Hat Ceph Storage 4, Beast is the default front-end web server. When upgrading from Red Hat Ceph Storage 3, the rgw_frontends parameter automatically changes to Beast. Beast uses the Boost.Beast C++ library to parse HTTP, and Boost.Asio to do asynchronous network I/O. CivetWeb In Red Hat Ceph Storage 3, CivetWeb is the default front end, but Beast can also be used by setting the rgw_frontends option accordingly. CivetWeb is an HTTP library, which is a fork of the Mongoose project. Additional Resources Boost C++ Libraries CivetWeb on GitHub 2.2. Using the Beast front end The Ceph Object Gateway provides CivetWeb and Beast embedded HTTP servers as front ends. The Beast front end uses the Boost.Beast library for HTTP parsing and the Boost.Asio library for asynchronous network I/O. In Red Hat Ceph Storage version 3.x, CivetWeb was the default front end, and to use the Beast front end it needed to be specified with rgw_frontends in the Red Hat Ceph Storage configuration file. As of Red Hat Ceph Storage version 4.0, the Beast front end is default, and upgrading from Red Hat Ceph Storage 3.x automatically changes the rgw_frontends parameter to Beast. Additional Resources Beast configuration options 2.3. Beast configuration options The following Beast configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value. If a value is not specified, the default value is empty. Option Description Default endpoint and ssl_endpoint Sets the listening address in the form address[:port] where the address is an IPv4 address string in dotted decimal form, or an IPv6 address in hexadecimal notation surrounded by square brackets. The optional port defaults to 8080 for endpoint and 443 for ssl_endpoint . It can be specified multiple times as in endpoint=[::1] endpoint=192.168.0.100:8000 . EMPTY ssl_certificate Path to the SSL certificate file used for SSL-enabled endpoints. If the file is a PEM file containing more than one item the order is important. The file must begin with the RGW server key, then any intermediate certificate, and finally the CA certificate. EMPTY ssl_private_key Optional path to the private key file used for SSL-enabled endpoints. If one is not given the file specified by ssl_certificate is used as the private key. EMPTY tcp_nodelay Performance optimization in some environments. EMPTY request_timeout_ms Set an explicit request timeout for the Beast front end. Setting a larger request timeout can make the gateway more tolerant of slow clients (for example, clients connected over high-latency networks). 65 Example /etc/ceph/ceph.conf file with Beast options using SSL: Note By default, the Beast front end writes an access log line recording all requests processed by the server to the RADOS Gateway log file. Additional Resources See Using the Beast front end for more information. 2.4. Changing the CivetWeb port When the Ceph Object Gateway is installed using Ansible it configures CivetWeb to run on port 8080 . Ansible does this by adding a line similar to the following in the Ceph configuration file: Important If the Ceph configuration file does not include the rgw frontends = civetweb line, the Ceph Object Gateway listens on port 7480 . 
If it includes an rgw_frontends = civetweb line but there is no port specified, the Ceph Object Gateway listens on port 80 . Important Because Ansible configures the Ceph Object Gateway to listen on port 8080 and the supported way to install Red Hat Ceph Storage 4 is using ceph-ansible , port 8080 is considered the default port in the Red Hat Ceph Storage 4 documentation. Prerequisites A running Red Hat Ceph Storage 4.1 cluster. A Ceph Object Gateway node. Procedure On the gateway node, open the Ceph configuration file in the /etc/ceph/ directory. Find an Ceph Object Gateway (RGW) client section similar to the example: The [client.rgw.gateway-node1] heading identifies this portion of the Ceph configuration file as configuring a Ceph Storage Cluster client where the client type is a Ceph Object Gateway as identified by rgw , and the name of the node is gateway-node1 . To change the default Ansible configured port of 8080 to 80 edit the rgw frontends line: Ensure there is no whitespace between port= port-number in the rgw_frontends key/value pair. Repeat this step on any other gateway nodes you want to change the port on. Restart the Ceph Object Gateway service from each gateway node to make the new port setting take effect: Ensure the configured port is open on each gateway node's firewall: If the port is not open, add the port and reload the firewall configuration: Additional Resources Using SSL with CivetWeb Civetweb Configuration Options 2.5. Using SSL with Civetweb In Red Hat Ceph Storage 1, Civetweb SSL support for the Ceph Object Gateway relied on HAProxy and keepalived. In Red Hat Ceph Storage 2 and later releases, Civetweb can use the OpenSSL library to provide Transport Layer Security (TLS). Important Production deployments MUST use HAProxy and keepalived to terminate the SSL connection at HAProxy. Using SSL with Civetweb is recommended ONLY for small-to-medium sized test and pre-production deployments. To use SSL with Civetweb, obtain a certificate from a Certificate Authority (CA) that matches the hostname of the gateway node. Red Hat recommends obtaining a certificate from a CA that has subject alternate name fields and a wildcard for use with S3-style subdomains. Civetweb requires the key, server certificate and any other certificate authority or intermediate certificate in a single .pem file. Important A .pem file contains the secret key. Protect the .pem file from unauthorized access. To configure a port for SSL, add the port number to rgw_frontends and append an s to the port number to indicate that it is a secure port. Additionally, add ssl_certificate with a path to the .pem file. For example: 2.6. Civetweb Configuration Options The following Civetweb configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value and if a value is not specified, then the default value is empty. Option Description Default access_log_file Path to a file for access logs. Either full path, or relative to the current working directory. If absent (default), then accesses are not logged. EMPTY error_log_file Path to a file for error logs. Either full path, or relative to the current working directory. If absent (default), then errors are not logged. EMPTY num_threads Number of worker threads. Civetweb handles each incoming connection in a separate thread. Therefore, the value of this option is effectively the number of concurrent HTTP connections Civetweb can handle. 
50 request_timeout_ms Timeout for network read and network write operations, in milliseconds. If a client intends to keep long-running connection, either increase this value or (better) use keep-alive messages. 30000 The following is an example of the /etc/ceph/ceph.conf file with some of these options set: Both the CivetWeb and Beast frontends write an access log line recording of all requests processed by the server to the RADOS gateway log file. 2.7. Add a Wildcard to the DNS To use Ceph with S3-style subdomains, for example bucket-name.domain-name.com , add a wildcard to the DNS record of the DNS server the ceph-radosgw daemon uses to resolve domain names. For dnsmasq , add the following address setting with a dot (.) prepended to the host name: For example: For bind , add a wildcard to the DNS record. For example: Restart the DNS server and ping the server with a subdomain to ensure that the ceph-radosgw daemon can process the subdomain requests: For example: If the DNS server is on the local machine, you may need to modify /etc/resolv.conf by adding a nameserver entry for the local machine. Finally, specify the host name or address of the DNS server in the appropriate [client.rgw.{instance}] section of the Ceph configuration file using the rgw_dns_name = {hostname} setting. For example: Note As a best practice, make changes to the Ceph configuration file at a centralized location such as an admin node or ceph-ansible and redistribute the configuration file as necessary to ensure consistency across the cluster. Finally, restart the Ceph object gateway so that DNS setting takes effect. 2.8. Adjusting Logging and Debugging Output Once you finish the setup procedure, check your logging output to ensure it meets your needs. If you encounter issues with your configuration, you can increase logging and debugging messages in the [global] section of your Ceph configuration file and restart the gateway(s) to help troubleshoot any configuration issues. For example: For RGW debug logs, add the following parameter in the [client.rgw.{instance}] section of your Ceph configuration file: You may also modify these settings at runtime. For example: The Ceph log files reside in /var/log/ceph by default. For general details on logging and debugging, see the Ceph debugging and logging configuration section of the Red Hat Ceph Storage Configuration Guide . 2.9. S3 server-side encryption The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programing interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form. Note Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO). Important To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators may disable SSL during testing by setting the rgw_crypt_require_ssl configuration setting to false at runtime, setting it to false in the Ceph configuration file and restarting the gateway instance, or setting it to false in the Ansible configuration files and replaying the Ansible playbooks for the Ceph Object Gateway. In a production environment, it might not be possible to send encrypted requests over SSL. 
In such a case, send requests using HTTP with server-side encryption. For information about how to configure HTTP with server-side encryption, see the Additional Resources section below. There are two options for the management of encryption keys: Customer-provided Keys When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification. Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode. Key Management Service When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification. Important Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems. Additional Resources Amazon SSE-C Amazon SSE-KMS Configuring server-side encryption The HashiCorp Vault 2.10. Server-side encryption requests In a production environment, clients often contact the Ceph Object Gateway through a proxy. This proxy is referred to as a load balancer because it connects to multiple Ceph Object Gateways. When the client sends requests to the Ceph Object Gateway, the load balancer routes those requests to the multiple Ceph Object Gateways, thus distributing the workload. In this type of configuration, it is possible that SSL terminations occur both at a load balancer and between the load balancer and the multiple Ceph Object Gateways. Communication occurs using HTTP only. To set up the Ceph Object Gateways to accept the server-side encryption requests, see Configuring server-side encryption . 2.11. Configuring server-side encryption As a storage administrator, you can set up server-side encryption to send requests to the Ceph Object Gateway using HTTP, in cases where it might not be possible to send encrypted requests over SSL. This procedure uses HAProxy as proxy and load balancer. Prerequisites Root-level access to all nodes in the storage cluster. A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. HAProxy is installed. Procedure Edit the haproxy.cfg file: Example Comment out the lines that allow access to the http front end and add instructions to direct HAProxy to use the https front end instead: Example On all nodes in the cluster, add the following parameter to the [global] section of the Ceph configuration file: Enable and start HAProxy: To ensure that rgw_trust_forwarded_https=true is not removed from the Ceph configuration file when Ansible is run, edit the ceph-ansible all.yml file and set rgw_trust_forwarded_https in the ceph_conf_overrides / global section to true . When you have finished making changes, run the ceph-ansible playbook to update the configuration on all Ceph nodes. Additional Resources Installing and configuring HAProxy Installing the Ceph client role Installing a Red Hat storage cluster 2.12. 
The HashiCorp Vault As a storage administrator, you can securely store keys, passwords and certificates in the HashiCorp Vault for use with the Ceph Object Gateway. The HashiCorp Vault provides a secure key management service for server-side encryption used by the Ceph Object Gateway. The basic workflow: The client requests the creation of a secret key from the Vault based on an object's key ID. The client uploads an object with the object's key ID to the Ceph Object Gateway. The Ceph Object Gateway then requests the newly created secret key from the Vault. The Vault replies to the request by returning the secret key to the Ceph Object Gateway. Now the Ceph Object Gateway can encrypt the object using the new secret key. After encryption is done, the object is stored on the Ceph OSD. Important Red Hat works with our technology partners to provide this documentation as a service to our customers. However, Red Hat does not provide support for this product. If you need technical assistance for this product, then contact Hashicorp for support. 2.12.1. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Installation of the HashiCorp Vault software. 2.12.2. Secret engines for Vault The HashiCorp Vault provides several secret engines to generate, store, or encrypt data. The application programming interface (API) sends data calls to the secret engine asking for action on that data, and the secret engine returns the result of that action request. The Ceph Object Gateway supports two of the HashiCorp Vault secret engines: Key/Value version 2 Transit Key/Value version 2 The Key/Value secret engine stores random secrets within the Vault, on disk. With version 2 of the kv engine, a key can have a configurable number of versions. The default number of versions is 10. Deleting a version does not delete the underlying data, but marks the data as deleted, allowing deleted versions to be undeleted. The key names must be strings, and the engine will convert non-string values into strings when using the command-line interface. To preserve non-string values, provide a JSON file or use the HTTP application programming interface (API). Note For access control list (ACL) policies, the Key/Value secret engine recognizes the distinctions between the create and update capabilities. Transit The Transit secret engine performs cryptographic functions on in-transit data. The Transit secret engine can generate hashes, can be a source of random bytes, and can also sign and verify data. The Vault does not store data when using the Transit secret engine. The Transit secret engine supports key derivation by allowing the same key to be used for multiple purposes. Also, the Transit secret engine supports key versioning.
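As an illustration of Transit key versioning, the following command sketch assumes a key named mybucketkey (the name is illustrative only); rotating the key creates a new key version, while existing versions remain available for decryption:
vault secrets enable transit
vault write -f transit/keys/mybucketkey
vault write -f transit/keys/mybucketkey/rotate
vault read transit/keys/mybucketkey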
The Transit secret engine supports these key types: aes128-gcm96 AES-GCM with a 128-bit AES key and a 96-bit nonce; supports encryption, decryption, key derivation, and convergent encryption aes256-gcm96 AES-GCM with a 256-bit AES key and a 96-bit nonce; supports encryption, decryption, key derivation, and convergent encryption (default) chacha20-poly1305 ChaCha20-Poly1305 with a 256-bit key; supports encryption, decryption, key derivation, and convergent encryption ed25519 Ed25519; supports signing, signature verification, and key derivation ecdsa-p256 ECDSA using curve P-256; supports signing and signature verification ecdsa-p384 ECDSA using curve P-384; supports signing and signature verification ecdsa-p521 ECDSA using curve P-521; supports signing and signature verification rsa-2048 2048-bit RSA key; supports encryption, decryption, signing, and signature verification rsa-3072 3072-bit RSA key; supports encryption, decryption, signing, and signature verification rsa-4096 4096-bit RSA key; supports encryption, decryption, signing, and signature verification Additional Resources See the KV Secrets Engine documentation on Vault's project site for more information. See the Transit Secrets Engine documentation on Vault's project site for more information. 2.12.3. Authentication for Vault The HashiCorp Vault supports several types of authentication mechanisms. The Ceph Object Gateway currently supports the Vault agent and the token authentication method. The Ceph Object Gateway uses the rgw_crypt_vault_auth , and rgw_crypt_vault_addr options to configure the use of the HashiCorp Vault. Token The token authentication method allows users to authenticate using a token. You can create new tokens, revoke secrets by token, and many other token operations. You can bypass other authentication methods, by using the token store. When using the token authentication method, the rgw_crypt_vault_token_file option must also be used. The token file can only be readable by the Ceph Object Gateway. Also, a Vault token with a restricted policy that allows fetching of keyrings from a specific path must be used. Warning Red Hat recommends not using token authentication for production environments. Vault Agent The Vault agent is a daemon that runs on a client node and provides client-side caching, along with token renewal. The Vault agent typically runs on the Ceph Object Gateway node. Additional Resources See the Token Auth Method documentation on Vault's project site for more information. See the Vault Agent documentation on Vault's project site for more information. 2.12.4. Namespaces for Vault Using HashiCorp Vault as an enterprise service provides centralized management for isolated namespaces that teams within an organization can use. These isolated namespace environments are known as tenants , and teams within an organization can utilize these tenants to isolate their policies, secrets, and identities from other teams. The namespace features of Vault help support secure multi-tenancy from within a single infrastructure. Additional Resources See the Vault Enterprise Namespaces documentation on Vault's project site for more information. 2.12.5. Configuring the Ceph Object Gateway to use the Vault To configure the Ceph Object Gateway to use the HashiCorp Vault it must be set as the encryption key store. Currently, the Ceph Object Gateway supports two different secret engines, and two different authentication methods. Prerequisites A running Red Hat Ceph Storage cluster. 
Installation of the Ceph Object Gateway software. Root-level access to a Ceph Object Gateway node. Procedure Open the Ceph configuration file, by default /etc/ceph/ceph.conf , for editing and enable the Vault as the encryption key store: Under the [client.radosgw. INSTANCE_NAME ] section, choose a Vault authentication method, either Token or the Vault agent. If using Token , then add the following lines: If using the Vault agent , then add the following lines: Under the [client.radosgw. INSTANCE_NAME ] section, choose a Vault secret engine, either Key/Value or Transit. If using Key/Value , then add the following line: If using Transit , then add the following line: Optionally, under the [client.radosgw. INSTANCE_NAME ] section, you can set the Vault namespace where the encryption keys will be retrieved: Restrict where the Ceph Object Gateway retrieves the encryption keys from the Vault by setting a path prefix: Example For exportable Transit keys, set the prefix path as follows: Assuming the domain name of the Vault server is vault-server , the Ceph Object Gateway will fetch encrypted transit keys from the following URL: Example Save the changes to the Ceph configuration file. Additional Resources See the Secret engines for Vault section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details. See the Authentication for Vault section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details. 2.12.6. Creating a key using the kv engine Configure the HashiCorp Vault Key/Value secret engine ( kv ) so you can create a key for use with the Ceph Object Gateway. Secrets are stored as key-value pairs in the kv secret engine. Important Keys for server-side encryption must be 256 bits long and encoded using base64 . Prerequisites A running Red Hat Ceph Storage cluster. Installation of the HashiCorp Vault software. Root-level access to the HashiCorp Vault node. Procedure Enable the Key/Value version 2 secret engine: Create a new key: Syntax Example 2.12.7. Creating a key using the transit engine Configure the HashiCorp Vault Transit secret engine ( transit ) so you can create a key for use with the Ceph Object Gateway. Keys created with the Transit secret engine must be exportable in order to be used for server-side encryption with the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the HashiCorp Vault software. Root-level access to the HashiCorp Vault node. Procedure Enable the Transit secret engine: Create a new exportable key: Syntax Example Note By default, the above command creates an aes256-gcm96 type key. Verify the creation of the key: Syntax Example Note Providing the full key path, including the key version, is required. 2.12.8. Uploading an object using AWS and the Vault When uploading an object to the Ceph Object Gateway, the Ceph Object Gateway will fetch the key from the Vault, and then encrypt and store the object in a bucket. When a request is made to download the object, the Ceph Object Gateway will automatically retrieve the corresponding key from the Vault and decrypt the object. Note The URL is constructed using the base address, set by the rgw_crypt_vault_addr option, and the path prefix, set by the rgw_crypt_vault_prefix option. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Installation of the HashiCorp Vault software. Access to a Ceph Object Gateway client node. Access to Amazon Web Services (AWS).
Procedure Upload an object using the AWS command-line client: Example Note The key fetching URL used in the example is: http://vault-server:8200/v1/secret/data/myproject/mybucketkey 2.12.9. Additional Resources See the Install Vault documentation on Vault's project site for more information. 2.13. Testing the Gateway To use the REST interfaces, first create an initial Ceph Object Gateway user for the S3 interface. Then, create a subuser for the Swift interface. You then need to verify that the created users are able to access the gateway. 2.13.1. Create an S3 User To test the gateway, create an S3 user and grant the user access. The man radosgw-admin command provides information on additional command options. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites root or sudo access Ceph Object Gateway installed Procedure Create an S3 user: Replace name with the name of the S3 user, for example: Verify the output to ensure that the values of access_key and secret_key do not include a JSON escape character ( \ ). These values are needed for access validation, but certain clients cannot handle the values if they include a JSON escape character. To fix this problem, perform one of the following actions: Remove the JSON escape character. Encapsulate the string in quotes. Regenerate the key and ensure that it does not include a JSON escape character. Specify the key and secret manually. Do not remove the forward slash / because it is a valid character. 2.13.2. Create a Swift user To test the Swift interface, create a Swift subuser. Creating a Swift user is a two-step process. The first step is to create the user. The second step is to create the secret key. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites Installation of the Ceph Object Gateway. Root-level access to the Ceph Object Gateway node. Procedure Create the Swift user: Syntax Replace NAME with the Swift user name, for example: Example Create the secret key: Syntax Replace NAME with the Swift user name, for example: Example 2.13.3. Test S3 Access You need to write and run a Python test script for verifying S3 access. The S3 access test script will connect to the radosgw , create a new bucket and list all buckets. The values for aws_access_key_id and aws_secret_access_key are taken from the values of access_key and secret_key returned by the radosgw-admin command. Note System users must have root privileges over the entire zone, as the output would contain additional JSON fields for maintaining metadata. Prerequisites root or sudo access. Ceph Object Gateway installed. S3 user created. Procedure Enable the common repository for Red Hat Enterprise Linux 7 and the High Availability repository for Red Hat Enterprise Linux 8: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Install the python-boto package. Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Create the Python script: Add the following contents to the file: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Replace ZONE with the zone name of the host where you have configured the gateway service, that is, the gateway host. Ensure that the host setting resolves with DNS. Replace PORT with the port number of the gateway. Replace ACCESS and SECRET with the access_key and secret_key values from the Create an S3 User section in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide .
Run the script: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Example output: 2.13.4. Test Swift Access Swift access can be verified via the swift command line client. The command man swift will provide more information on available command line options. To install the swift client, execute the following: To test swift access, execute the following: Replace {IP ADDRESS} with the public IP address of the gateway server and {swift_secret_key} with its value from the output of the radosgw-admin key create command executed for the swift user. Replace {port} with the port number you are using with Civetweb (e.g., 8080 is the default). If you don't replace the port, it will default to port 80 . For example: The output should be: 2.14. Configuring HAProxy/keepalived The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone so that you can scale out as load increases, that is, the same zone group and zone; however, you do not need a federated architecture to use HAProxy/keepalived . Since each Ceph Object Gateway instance has its own IP address, you can use HAProxy and keepalived to balance the load across Ceph Object Gateway servers. Another use case for HAProxy and keepalived is to terminate HTTPS at the HAProxy server and use HTTP between the HAProxy server and the Civetweb gateway instances. Note This section describes configuration of HAProxy and keepalived for Red Hat Enterprise Linux 7. For Red Hat Enterprise Linux 8, install the keepalived and haproxy packages to install the Load Balancer. See the Do we need any additional subscription for Load Balancing on Red Hat Enterprise Linux 8? Knowledgebase article for details. 2.14.1. HAProxy/keepalived Prerequisites To set up HAProxy with the Ceph Object Gateway, you must have: A running Ceph cluster At least two Ceph Object Gateway servers within the same zone configured to run on port 80 . If you follow the simple installation procedure, the gateway instances are in the same zone group and zone by default. If you are using a federated architecture, ensure that the instances are in the same zone group and zone; and, At least two servers for HAProxy and keepalived . Note This section assumes that you have at least two Ceph Object Gateway servers running, and that you get a valid response from each of them when running test scripts over port 80 . For a detailed discussion of HAProxy and keepalived , see Load Balancer Administration . 2.14.2. Preparing HAProxy Nodes The following setup assumes two HAProxy nodes named haproxy and haproxy2 and two Ceph Object Gateway servers named rgw1 and rgw2 . You may use any naming convention you prefer. Perform the following procedure on each of your HAProxy nodes (at least two): Install Red Hat Enterprise Linux 7. Register the nodes. Enable the RHEL server repository. Update the server. Install admin tools (e.g., wget , vim , etc.) as needed. Open port 80 . For HTTPS, open port 443 . Connect to the required port. 2.14.3. Installing and Configuring keepalived Perform the following procedure on each of your HAProxy nodes (at least two): Prerequisites A minimum of two HAProxy nodes. A minimum of two Object Gateway nodes. Procedure Install keepalived : Configure keepalived on both HAProxy nodes: In the configuration file, there is a script to check the haproxy processes: In the following example, the instance on the master and backup load balancers uses eno1 as the network interface. It also assigns a virtual IP address, that is, 192.168.1.20 .
Master load balancer node Backup load balancer node Enable and start the keepalived service: Additional Resources For a detailed discussion of configuring keepalived , refer to Initial Load Balancer Configuration with Keepalived . 2.14.4. Installing and Configuring HAProxy Perform the following procedure on your at least two HAProxy nodes: Install haproxy . Configure haproxy for SELinux and HTTP. Add the following lines: As root , assign the correct SELinux context and file permissions to the haproxy-http.xml file. If you intend to use HTTPS, configure haproxy for SELinux and HTTPS. Add the following lines: As root , assign the correct SELinux context and file permissions to the haproxy-https.xml file. If you intend to use HTTPS, generate keys for SSL. If you do not have a certificate, you may use a self-signed certificate. To generate a key, see to Generating a New Key and Certificate section in the System Administrator's Guide for Red Hat Enterprise Linux 7. Finally, put the certificate and key into a PEM file. Configure haproxy . The global and defaults may remain unchanged. After the defaults section, you will need to configure frontend and backend sections. For example: For a detailed discussion of HAProxy configuration, refer to HAProxy Configuration . Enable/start haproxy 2.14.5. Testing the HAProxy Configuration On your HAProxy nodes, check to ensure the virtual IP address from your keepalived configuration appears. On your calamari node, see if you can reach the gateway nodes via the load balancer configuration. For example: This should return the same result as: If it returns an index.html file with the following contents: Then, your configuration is working properly. 2.15. Configuring Gateways for Static Web Hosting Traditional web hosting involves setting up a web server for each website, which can use resources inefficiently when content does not change dynamically. Ceph Object Gateway can host static web sites in S3 buckets- that is, sites that do not use server-side services like PHP, servlets, databases, nodejs and the like. This approach is substantially more economical than setting up VMs with web servers for each site. 2.15.1. Static Web Hosting Assumptions Static web hosting requires at least one running Ceph Storage Cluster, and at least two Ceph Object Gateway instances for static web sites. Red Hat assumes that each zone will have multiple gateway instances load balanced by HAProxy/keepalived. See Configuring HAProxy/keepalived for additional details on HAProxy/keepalived. Note Red Hat DOES NOT support using a Ceph Object Gateway instance to deploy both standard S3/Swift APIs and static web hosting simultaneously. 2.15.2. Static Web Hosting Requirements Static web hosting functionality uses its own API, so configuring a gateway to use static web sites in S3 buckets requires the following: S3 static web hosting uses Ceph Object Gateway instances that are separate and distinct from instances used for standard S3/Swift API use cases. Gateway instances hosting S3 static web sites should have separate, non-overlapping domain names from the standard S3/Swift API gateway instances. Gateway instances hosting S3 static web sites should use separate public-facing IP addresses from the standard S3/Swift API gateway instances. Gateway instances hosting S3 static web sites load balance, and if necessary terminate SSL, using HAProxy/keepalived. 2.15.3. 
Static Web Hosting Gateway Setup To enable a gateway for static web hosting, edit the Ceph configuration file and add the following settings: The rgw_enable_static_website setting MUST be true . The rgw_enable_apis setting MUST enable the s3website API. The rgw_dns_name and rgw_dns_s3website_name settings must provide their fully qualified domains. If the site will use canonical name extensions, set rgw_resolve_cname to true . Important The FQDNs of rgw_dns_name and rgw_dns_s3website_name MUST NOT overlap. 2.15.4. Static Web Hosting DNS Configuration The following is an example of assumed DNS settings, where the first two lines specify the domains of the gateway instance using a standard S3 interface and point to the IPv4 and IPv6 addresses respectively. The third line provides a wildcard CNAME setting for S3 buckets using canonical name extensions. The fourth and fifth lines specify the domains for the gateway instance using the S3 website interface and point to their IPv4 and IPv6 addresses respectively. Note The IP addresses in the first two lines differ from the IP addresses in the fourth and fifth lines. If using Ceph Object Gateway in a multi-site configuration, consider using a routing solution to route traffic to the gateway closest to the client. The Amazon Web Service (AWS) requires static web host buckets to match the host name. Ceph provides a few different ways to configure the DNS, and HTTPS will work if the proxy has a matching certificate. Hostname to a Bucket on a Subdomain To use AWS-style S3 subdomains, use a wildcard in the DNS entry and can redirect requests to any bucket. A DNS entry might look like the following: Access the bucket name in the following manner: Where the bucket name is bucket1 . Hostname to Non-Matching Bucket Ceph supports mapping domain names to buckets without including the bucket name in the request, which is unique to Ceph Object Gateway. To use a domain name to access a bucket, map the domain name to the bucket name. A DNS entry might look like the following: Where the bucket name is bucket2 . Access the bucket in the following manner: Hostname to Long Bucket with CNAME AWS typically requires the bucket name to match the domain name. To configure the DNS for static web hosting using CNAME, the DNS entry might look like the following: Access the bucket in the following manner: Hostname to Long Bucket without CNAME If the DNS name contains other non-CNAME records such as SOA , NS , MX or TXT , the DNS record must map the domain name directly to the IP address. For example: Access the bucket in the following manner: 2.15.5. Creating a Static Web Hosting Site To create a static website perform the following steps: Create an S3 bucket. The bucket name MAY be the same as the website's domain name. For example, mysite.com may have a bucket name of mysite.com . This is required for AWS, but it is NOT required for Ceph. See DNS Settings for details. Upload the static website content to the bucket. Contents may include HTML, CSS, client-side JavaScript, images, audio/video content and other downloadable files. A website MUST have an index.html file and MAY have error.html file. Verify the website's contents. At this point, only the creator of the bucket will have access to the contents. Set permissions on the files so that they are publicly readable. 2.16. 
Exporting the Namespace to NFS-Ganesha In Red Hat Ceph Storage 3 and later, the Ceph Object Gateway provides the ability to export S3 object namespaces by using NFS version 3 and NFS version 4.1 for production systems. Note The NFS Ganesha feature is not for general use, but rather for migration to an S3 cloud only. Note Red Hat Ceph Storage does not support NFS-export of versioned buckets. The implementation conforms to Amazon Web Services (AWS) hierarchical namespace conventions which map UNIX-style path names onto S3 buckets and objects. The top level of the attached namespace, which is subordinate to the NFSv4 pseudo root if present, consists of the Ceph Object Gateway S3 buckets, where buckets are represented as NFS directories. Objects within a bucket are presented as NFS file and directory hierarchies, following S3 conventions. Operations to create files and directories are supported. Note Creating or deleting hard or soft links IS NOT supported. Performing rename operations on buckets or directories IS NOT supported via NFS, but rename on files IS supported within and between directories, and between a file system and an NFS mount. File rename operations are more expensive when conducted over NFS, as they change the target directory and typically forces a full readdir to refresh it. Note Editing files via the NFS mount IS NOT supported. Note The Ceph Object Gateway requires applications to write sequentially from offset 0 to the end of a file. Attempting to write out of order causes the upload operation to fail. To work around this issue, use utilities like cp , cat , or rsync when copying files into NFS space. Always mount with the sync option. The Ceph Object Gateway with NFS is based on an in-process library packaging of the Gateway server and a File System Abstraction Layer (FSAL) namespace driver for the NFS-Ganesha NFS server. At runtime, an instance of the Ceph Object Gateway daemon with NFS combines a full Ceph Object Gateway daemon, albeit without the Civetweb HTTP service, with an NFS-Ganesha instance in a single process. To make use of this feature, deploy NFS-Ganesha version 2.3.2 or later. Perform the steps in the Before you Start and Configuring an NFS-Ganesha Instance procedures on the host that will contain the NFS-Ganesha ( nfs-ganesha-rgw ) instance. Running Multiple NFS Gateways Each NFS-Ganesha instance acts as a full gateway endpoint, with the current limitation that an NFS-Ganesha instance cannot be configured to export HTTP services. As with ordinary gateway instances, any number of NFS-Ganesha instances can be started, exporting the same or different resources from the cluster. This enables the clustering of NFS-Ganesha instances. However, this does not imply high availability. When regular gateway instances and NFS-Ganesha instances overlap the same data resources, they will be accessible from both the standard S3 API and through the NFS-Ganesha instance as exported. You can co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance on the same host. Before you Start Disable any running kernel NFS service instances on any host that will run NFS-Ganesha before attempting to run NFS-Ganesha. NFS-Ganesha will not start if another NFS instance is running. As root , enable the Red Hat Ceph Storage Tools repository: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Make sure that the rpcbind service is running: Note The rpcbind package that provides rpcbind is usually installed by default. If that is not the case, install the package first. 
For details on how NFS uses rpcbind , see the Required Services section in the Storage Administration Guide for Red Hat Enterprise Linux 7. If the nfs-service service is running, stop and disable it: Configuring an NFS-Ganesha Instance Install the nfs-ganesha-rgw package: Copy the Ceph configuration file from a Ceph Monitor node to the /etc/ceph/ directory of the NFS-Ganesha host, and edit it as necessary: Note The Ceph configuration file must contain a valid [client.rgw.{instance-name}] section and corresponding parameters for the various required Gateway configuration variables such as rgw_data , keyring , or rgw_frontends . If exporting Swift containers that do not conform to valid S3 bucket naming requirements, set rgw_relaxed_s3_bucket_names to true in the [client.rgw] section of the Ceph configuration file. For example, if a Swift container name contains underscores, it is not a valid S3 bucket name and will not get synchronized unless rgw_relaxed_s3_bucket_names is set to true . When adding objects and buckets outside of NFS, those objects will appear in the NFS namespace in the time set by rgw_nfs_namespace_expire_secs , which is about 5 minutes by default. Override the default value for rgw_nfs_namespace_expire_secs in the Ceph configuration file to change the refresh rate. Open the NFS-Ganesha configuration file: Configure the EXPORT section with an FSAL (File System Abstraction Layer) block. Provide an ID, S3 user ID, S3 access key, and secret. For NFSv4, it should look something like this: The Path option instructs Ganesha where to find the export. For the VFS FSAL, this is the location within the server's namespace. For other FSALs, it may be the location within the filesystem managed by that FSAL's namespace. For example, if the Ceph FSAL is used to export an entire CephFS volume, Path would be / . The Pseudo option instructs Ganesha where to place the export within NFS v4's pseudo file system namespace. NFS v4 specifies the server may construct a pseudo namespace that may not correspond to any actual locations of exports, and portions of that pseudo filesystem may exist only within the realm of the NFS server and not correspond to any physical directories. Further, an NFS v4 server places all its exports within a single namespace. It is possible to have a single export exported as the pseudo filesystem root, but it is much more common to have multiple exports placed in the pseudo filesystem. With a traditional VFS, often the Pseudo location is the same as the Path location. Returning to the example CephFS export with / as the Path , if multiple exports are desired, the export would likely have something else as the Pseudo option. For example, /ceph . Any EXPORT block which should support NFSv3 should include version 3 in the NFS_Protocols setting. Additionally, NFSv3 is the last major version to support the UDP transport. Early versions of the standard included UDP, but RFC 7530 forbids its use. To enable UDP, include it in the Transport_Protocols setting. For example: Setting SecType = sys; allows clients to attach without Kerberos authentication. Setting Squash = No_Root_Squash; enables a user to change directory ownership in the NFS mount. NFS clients using a conventional OS-native NFS 4.1 client typically see a federated namespace of exported file systems defined by the destination server's pseudofs root. Any number of these can be Ceph Object Gateway exports. 
Each export has its own tuple of name , User_Id , Access_Key , and Secret_Access_Key and creates a proxy of the object namespace visible to the specified user. An export in ganesha.conf can also contain an NFSV4 block. Red Hat Ceph Storage supports the Allow_Numeric_Owners and Only_Numeric_Owners parameters as an alternative to setting up the idmapper program. Configure an NFS_CORE_PARAM block. When the mount_path_pseudo configuration setting is set to true , it will make the NFS v3 and NFS v4.x mounts use the same server-side path to reach an export, for example: When the mount_path_pseudo configuration setting is set to false , NFS v3 mounts use the Path option and NFS v4.x mounts use the Pseudo option. Configure the RGW section. Specify the name of the instance, provide a path to the Ceph configuration file, and specify any initialization arguments: Save the /etc/ganesha/ganesha.conf configuration file. Enable and start the nfs-ganesha service. For very large pseudo directories, set the configurable parameter rgw_nfs_s3_fast_attrs to true in the ceph.conf file to make the namespace immutable and accelerated: Restart the Ceph Object Gateway service from each gateway node: Configuring NFSv4 clients To access the namespace, mount the configured NFS-Ganesha export(s) into desired locations in the local POSIX namespace. As noted, this implementation has a few unique restrictions: Only the NFS 4.1 and higher protocol flavors are supported. To enforce write ordering, use the sync mount option. To mount the NFS-Ganesha exports, add the following entry to the /etc/fstab file on the client host: Specify the NFS-Ganesha host name and the path to the mount point on the client. Note To successfully mount the NFS-Ganesha exports, the /sbin/mount.nfs file must exist on the client. The nfs-tools package provides this file. In most cases, the package is installed by default. However, verify that the nfs-tools package is installed on the client and if not, install it. For additional details on NFS, see the Network File System (NFS) chapter in the Storage Administration Guide for Red Hat Enterprise Linux 7. Configuring NFSv3 clients Linux clients can be configured to mount with NFSv3 by supplying nfsvers=3 and noacl as mount options. To use UDP as the transport, add proto=udp to the mount options. However, TCP is the preferred protocol. Note Configure the NFS Ganesha EXPORT block Protocols setting with version 3 and the Transports setting with UDP if the mount will use version 3 with UDP. Since NFSv3 does not communicate client OPEN and CLOSE operations to file servers, RGW NFS cannot use these operations to mark the beginning and ending of file upload transactions. Instead, RGW NFS attempts to start a new upload when the first write is sent to a file at offset 0, and finalizes the upload when no new writes to the file have been seen for a period of time, by default 10 seconds. To change this value, set a value for rgw_nfs_write_completion_interval_s in the RGW section(s) of the Ceph configuration file.
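For example, a minimal ceph.conf sketch that lengthens the NFSv3 upload completion window to 30 seconds; the instance name gateway-node1 and the 30-second value are assumptions for illustration, not recommendations:
[client.rgw.gateway-node1]
# finalize an NFSv3 upload after 30 seconds without new writes (default is 10)
rgw_nfs_write_completion_interval_s = 30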
[ "[client.rgw.node1] rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>", "rgw frontends = civetweb port=192.168.122.199:8080 num_threads=100", "[client.rgw.gateway-node1] host = gateway-node1 keyring = /var/lib/ceph/radosgw/ceph-rgw.gateway-node1/keyring log file = /var/log/ceph/ceph-rgw-gateway-node1.log rgw frontends = civetweb port=192.168.122.199:8080 num_threads=100", "rgw frontends = civetweb port=192.168.122.199:80 num_threads=100", "systemctl restart ceph-radosgw.target", "firewall-cmd --list-all", "firewall-cmd --zone=public --add-port 80/tcp --permanent firewall-cmd --reload", "[client.rgw.{hostname}] rgw_frontends = \"civetweb port=443s ssl_certificate=/etc/ceph/private/server.pem\"", "[client.rgw.node1] rgw frontends = civetweb request_timeout_ms=30000 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log", "address=/.{hostname-or-fqdn}/{host-ip-address}", "address=/.gateway-node1/192.168.122.75", "USDTTL 604800 @ IN SOA gateway-node1. root.gateway-node1. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS gateway-node1. @ IN A 192.168.122.113 * IN CNAME @", "ping mybucket.{hostname}", "ping mybucket.gateway-node1", "[client.rgw.rgw1.rgw0] rgw_dns_name = {hostname}", "append the following in the global section. debug ms = 1 debug civetweb = 20", "[client.rgw.rgw1.rgw0] debug rgw = 20", "ceph tell osd.0 injectargs --debug_civetweb 10/20", "frontend http_web bind *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check", "frontend http_web bind *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto https here we set the incoming HTTPS port on the load balancer (eg : 443) http-request set-header X-Forwarded-Port 443 default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check", "rgw_trust_forwarded_https=true", "systemctl enable haproxy systemctl start haproxy", "ceph_conf_overrides: global: rgw_trust_forwarded_https: true", "rgw_crypt_s3_kms_backend = vault", "rgw_crypt_vault_auth = token rgw_crypt_vault_token_file = /etc/ceph/vault.token rgw_crypt_vault_addr = http:// VAULT_SERVER :8200", "rgw_crypt_vault_auth = agent rgw_crypt_vault_addr = http:// VAULT_SERVER :8100", "rgw_crypt_vault_secret_engine = kv", "rgw_crypt_vault_secret_engine = transit", "rgw_crypt_vault_namespace = NAME_OF_THE_NAMESPACE", "rgw_crypt_vault_prefix = /v1/secret/data", "rgw_crypt_vault_prefix = /v1/transit/export/encryption-key", "http://vault-server:8200/v1/transit/export/encryption-key", "vault secrets enable kv-v2", "vault kv put secret/ PROJECT_NAME / BUCKET_NAME key=USD(openssl rand -base64 32)", "vault kv put secret/myproject/mybucketkey key=USD(openssl rand -base64 32) ====== Metadata ====== Key Value --- ----- created_time 2020-02-21T17:01:09.095824999Z deletion_time n/a destroyed false version 1", "vault secrets enable transit", "vault write -f transit/keys/ BUCKET_NAME exportable=true", "vault write -f transit/keys/mybucketkey exportable=true", "vault read transit/export/encryption-key/ BUCKET_NAME / VERSION_NUMBER", "vault read 
transit/export/encryption-key/mybucketkey/1 Key Value --- ----- keys map[1:-gbTI9lNpqv/V/2lDcmH2Nq1xKn6FPDWarCmFM2aNsQ=] name mybucketkey type aes256-gcm96", "[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey", "radosgw-admin user create --uid= name --display-name=\"First User\"", "radosgw-admin user create --uid=\"testuser\" --display-name=\"First User\" { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"CEP28KDIQXBKU4M15PDC\", \"secret_key\": \"MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full", "radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret", "radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "subscription-manager repos --enable=rhel-7-server-rh-common-rpms", "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum install python-boto", "dnf install python3-boto3", "vi s3test.py", "import boto import boto.s3.connection access_key = ' ACCESS ' 
secret_key = ' SECRET ' boto.config.add_section('s3') conn = boto.connect_s3( aws_access_key_id = access_key, aws_secret_access_key = secret_key, host = 's3. ZONE .hostname', port = PORT , is_secure=False, calling_format = boto.s3.connection.OrdinaryCallingFormat(), ) bucket = conn.create_bucket('my-new-bucket') for bucket in conn.get_all_buckets(): print \"{name}\\t{created}\".format( name = bucket.name, created = bucket.creation_date, )", "import boto3 endpoint = \"\" # enter the endpoint URL along with the port \"http:// URL :_PORT_\" access_key = ' ACCESS ' secret_key = ' SECRET ' s3 = boto3.client( 's3', endpoint_url=endpoint, aws_access_key_id=access_key, aws_secret_access_key=secret_key ) s3.create_bucket(Bucket='my-new-bucket') response = s3.list_buckets() for bucket in response['Buckets']: print(\"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'] ))", "python s3test.py", "python3 s3test.py", "my-new-bucket 2021-08-16T17:09:10.000Z", "sudo yum install python-setuptools sudo easy_install pip sudo pip install --upgrade setuptools sudo pip install --upgrade python-swiftclient", "swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list", "swift -A http://10.19.143.116:8080/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list", "my-new-bucket", "subscription-manager register", "subscription-manager repos --enable=rhel-7-server-rpms", "yum update -y", "firewall-cmd --zone=public --add-port 80/tcp --permanent firewall-cmd --reload", "firewall-cmd --zone=public --add-port 443/tcp --permanent firewall-cmd --reload", "semanage port -m -t http_cache_port_t -p tcp 8081", "yum install -y keepalived", "vim /etc/keepalived/keepalived.conf", "vrrp_script chk_haproxy { script \"killall -0 haproxy\" # check the haproxy process interval 2 # every 2 seconds weight 2 # add 2 points if OK }", "vrrp_instance RGW { state MASTER # might not be necessary. This is on the Master LB node. @main interface eno1 priority 100 advert_int 1 interface eno1 virtual_router_id 50 @main unicast_src_ip 10.8.128.43 80 unicast_peer { 10.8.128.53 } authentication { auth_type PASS auth_pass 1111 } virtual_ipaddress { 192.168.1.20 } track_script { chk_haproxy } } virtual_server 192.168.1.20 80 eno1 { #populate correct interface delay_loop 6 lb_algo wlc lb_kind dr persistence_timeout 600 protocol TCP real_server 10.8.128.43 80 { # ip address of rgw2 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar weight 100 TCP_CHECK { # perhaps change these to a HTTP/SSL GET? connect_timeout 3 } } real_server 10.8.128.53 80 { # ip address of rgw3 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar weight 100 TCP_CHECK { # perhaps change these to a HTTP/SSL GET? connect_timeout 3 } } }", "vrrp_instance RGW { state BACKUP # might not be necessary? priority 99 advert_int 1 interface eno1 virtual_router_id 50 unicast_src_ip 10.8.128.53 80 unicast_peer { 10.8.128.43 } authentication { auth_type PASS auth_pass 1111 } virtual_ipaddress { 192.168.1.20 } track_script { chk_haproxy } } virtual_server 192.168.1.20 80 eno1 { #populate correct interface delay_loop 6 lb_algo wlc lb_kind dr persistence_timeout 600 protocol TCP real_server 10.8.128.43 80 { # ip address of rgw2 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar weight 100 TCP_CHECK { # perhaps change these to a HTTP/SSL GET? 
connect_timeout 3 } } real_server 10.8.128.53 80 { # ip address of rgw3 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar weight 100 TCP_CHECK { # perhaps change these to a HTTP/SSL GET? connect_timeout 3 } } }", "systemctl enable keepalived systemctl start keepalived", "yum install haproxy", "vim /etc/firewalld/services/haproxy-http.xml", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <service> <short>HAProxy-HTTP</short> <description>HAProxy load-balancer</description> <port protocol=\"tcp\" port=\"80\"/> </service>", "cd /etc/firewalld/services restorecon haproxy-http.xml chmod 640 haproxy-http.xml", "vim /etc/firewalld/services/haproxy-https.xml", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <service> <short>HAProxy-HTTPS</short> <description>HAProxy load-balancer</description> <port protocol=\"tcp\" port=\"443\"/> </service>", "cd /etc/firewalld/services restorecon haproxy-https.xml chmod 640 haproxy-https.xml", "cat example.com.crt example.com.key > example.com.pem cp example.com.pem /etc/ssl/private/", "vim /etc/haproxy/haproxy.cfg", "frontend http_web bind *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:80 check server rgw2 10.0.0.80:80 check", "systemctl enable haproxy systemctl start haproxy", "ip addr show", "wget haproxy", "wget rgw1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>", "[client.rgw.<STATIC-SITE-HOSTNAME>] rgw_enable_static_website = true rgw_enable_apis = s3, s3website rgw_dns_name = objects-zonegroup.domain.com rgw_dns_s3website_name = objects-website-zonegroup.domain.com rgw_resolve_cname = true", "objects-zonegroup.domain.com. IN A 192.0.2.10 objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10 *.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com. objects-website-zonegroup.domain.com. IN A 192.0.2.20 objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20", "*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.", "http://bucket1.objects-website-zonegroup.domain.com", "www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.", "http://www.example.com", "www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.", "http://www.example.com", "www.example.com. IN A 192.0.2.20 www.example.com. 
IN AAAA 2001:DB8::192:0:2:20", "http://www.example.com", "subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms", "subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms", "systemctl start rpcbind", "systemctl stop nfs-server.service systemctl disable nfs-server.service", "yum install nfs-ganesha-rgw", "scp <mon-host>:/etc/ceph/ceph.conf <nfs-ganesha-rgw-host>:/etc/ceph", "vim /etc/ganesha/ganesha.conf", "EXPORT { Export_ID={numeric-id}; Path = \"/\"; Pseudo = \"/\"; Access_Type = RW; SecType = \"sys\"; NFS_Protocols = 4; Transport_Protocols = TCP; Squash = No_Root_Squash; FSAL { Name = RGW; User_Id = {s3-user-id}; Access_Key_Id =\"{s3-access-key}\"; Secret_Access_Key = \"{s3-secret}\"; } }", "EXPORT { NFS_Protocols = 3,4; Transport_Protocols = UDP,TCP; }", "NFSV4 { Allow_Numeric_Owners = true; Only_Numeric_Owners = true; }", "NFS_CORE_PARAM{ mount_path_pseudo = true; }", "mount -o vers=3 <IP ADDRESS>:/export /mnt mount -o vers=4 <IP ADDRESS>:/export /mnt", "Path Pseudo Tag Mechanism Mount /export/test1 /export/test1 test1 v3 Pseudo mount -o vers=3 server:/export/test1 /export/test1 /export/test1 test1 v3 Tag mount -o vers=3 server:test1 /export/test1 /export/test1 test1 v4 Pseudo mount -o vers=4 server:/export/test1 / /export/ceph1 ceph1 v3 Pseudo mount -o vers=3 server:/export/ceph1 / /export/ceph1 ceph1 v3 Tag mount -o vers=3 server:ceph1 / /export/ceph1 ceph1 v4 Pseudo mount -o vers=4 server:/export/ceph1 / /export/ceph2 ceph2 v3 Pseudo mount -o vers=3 server:/export/ceph2 / /export/ceph2 ceph2 v3 Tag mount -o vers=3 server:ceph2 / /export/ceph2 ceph2 v4 Pseudo mount -o vers=4", "Path Pseudo Tag Mechanism Mount /export/test1 /export/test1 test1 v3 Path mount -o vers=3 server:/export/test1 /export/test1 /export/test1 test1 v3 Tag mount -o vers=3 server:test1 /export/test1 /export/test1 test1 v4 Pseudo mount -o vers=4 server:/export/test1 / /export/ceph1 ceph1 v3 Path mount -o vers=3 server:/ / /export/ceph1 ceph1 v3 Tag mount -o vers=3 server:ceph1 / /export/ceph1 ceph1 v4 Pseudo mount -o vers=4 server:/export/ceph1 / /export/ceph2 ceph2 v3 Path not accessible / /export/ceph2 ceph2 v3 Tag mount -o vers=3 server:ceph2 / /export/ceph2 ceph2 v4 Pseudo mount -o vers=4 server:/export/ceph2", "RGW { name = \"client.rgw.{instance-name}\"; ceph_conf = \"/etc/ceph/ceph.conf\"; init_args = \"--{arg}={arg-value}\"; }", "systemctl enable nfs-ganesha systemctl start nfs-ganesha", "rgw_nfs_s3_fast_attrs= true", "systemctl restart ceph-radosgw.target", "<ganesha-host-name>:/ <mount-point> nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0", "<ganesha-host-name>:/ <mount-point> nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_configuration_and_administration_guide/rgw-configuration-rgw
Part II. Learn
Part II. Learn
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/learn
Hosted control planes
Hosted control planes OpenShift Container Platform 4.18 Using hosted control planes with OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/index
Chapter 4. Configuring CPUs on Compute nodes
Chapter 4. Configuring CPUs on Compute nodes As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances for optimal CPU performance: CPU pinning : Pin virtual CPUs to physical CPUs. Emulator threads : Pin emulator threads associated with the instance to physical CPUs. CPU feature flags : Configure the standard set of CPU feature flags that are applied to instances to improve live migration compatibility across Compute nodes. 4.1. Configuring CPU pinning on Compute nodes You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node. You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following: Designate Compute nodes for CPU pinning. Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes. Deploy the overcloud. Create a flavor for launching instances that require CPU pinning. Create a flavor for launching instances that use shared, or floating, CPUs. Note Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts. For more information, see Constraints when using NUMA . 4.1.1. Prerequisites You know the NUMA topology of your Compute node. You have configured NovaReservedHugePages on the Compute nodes. For more information, see Configuring huge pages on Compute nodes . 4.1.2. Designating Compute nodes for CPU pinning To designate Compute nodes for instances with pinned CPUs, you must create a new role file to configure the CPU pinning role, and configure the bare metal nodes with a CPU pinning resource class to use to tag the Compute nodes for CPU pinning. Note The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes . Procedure Log in to the undercloud as the stack user. 
Source the stackrc file: Generate a new roles data file named roles_data_cpu_pinning.yaml that includes the Controller , Compute , and ComputeCPUPinning roles, along with any other roles that you need for the overcloud: Open roles_data_cpu_pinning.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputeCPUPinning Role name name: Compute name: ComputeCPUPinning description Basic Compute Node role CPU Pinning Compute Node role HostnameFormatDefault %stackname%-novacompute-%index% %stackname%-novacomputepinning-%index% deprecated_nic_config_name compute.yaml compute-cpu-pinning.yaml Register the CPU pinning Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide. Tag each bare metal node that you want to designate for CPU pinning with a custom CPU pinning resource class: Replace <node> with the ID of the bare metal node. Add the ComputeCPUPinning role to your node definition file, overcloud-baremetal-deploy.yaml , and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes: 1 You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used. For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes . For an example node definition file, see Example node definition file . Run the provisioning command to provision the new nodes for your role: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files: Replace <cpu_pinning_net_top> with the name of the file that contains the network topology of the ComputeCPUPinning role, for example, compute.yaml to use the default network topology. 4.1.3. Configuring Compute nodes for CPU pinning Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. Reserve some CPU cores across all the NUMA nodes for the host processes for efficiency. Assign the remaining CPU cores to managing your instances. 
This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning: Table 4.1. Example of NUMA Topology NUMA Node 0 NUMA Node 1 Core 0 Core 1 Core 2 Core 3 Core 4 Core 5 Core 6 Core 7 The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning. Procedure Create an environment file to configure Compute nodes to reserve cores for pinned instances, floating instances, and host processes, for example, cpu_pinning.yaml . To schedule instances with a NUMA topology on NUMA-capable Compute nodes, add NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter in your Compute environment file, if not already present: For more information on NUMATopologyFilter , see Compute scheduler filters . To reserve physical CPU cores for the dedicated instances, add the following configuration to cpu_pinning.yaml : To reserve physical CPU cores for the shared instances, add the following configuration to cpu_pinning.yaml : If you are not using file-backed memory, specify the amount of RAM to reserve for host processes: Replace <ram> with the amount of RAM to reserve in MB. To ensure that host processes do not run on the CPU cores reserved for instances, set the parameter IsolCpusList to the CPU cores you have reserved for instances: Specify the value of the IsolCpusList parameter using a list, or ranges, of CPU indices separated by a comma. Add your new files to the stack with your other environment files and deploy the overcloud: 4.1.4. Creating a dedicated CPU flavor for instances To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances. Prerequisites Simultaneous multithreading (SMT) is enabled on the host. The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Source the overcloudrc file: Create a flavor for instances that require CPU pinning: To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated : If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. For more information, see Configuring huge pages on Compute nodes . To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require : Note If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. 
To prevent this, set hw:cpu_thread_policy to prefer instead of require . The prefer policy is the default policy that ensures that thread siblings are used when available. If you use hw:cpu_thread_policy=isolate , you must have SMT disabled or use a platform that does not support SMT. Verification To verify the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance: 4.1.5. Creating a shared CPU flavor for instances To enable your cloud users to create instances that use shared, or floating, CPUs, you can create a flavor with a shared CPU policy for launching instances. Prerequisites The Compute node is configured to reserve physical CPU cores for the shared CPUs. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Source the overcloudrc file: Create a flavor for instances that do not require CPU pinning: To request floating CPUs, set the hw:cpu_policy property of the flavor to shared : If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. For more information, see Configuring huge pages on Compute nodes . 4.1.6. Creating a mixed CPU flavor for instances To enable your cloud users to create instances that have a mix of dedicated and shared CPUs, you can create a flavor with a mixed CPU policy for launching instances. Procedure Source the overcloudrc file: Create a flavor for instances that require a mix of dedicated and shared CPUs: Specify which CPUs must be dedicated or shared: Replace <CPU_number> with the CPUs that must be either dedicated or shared: To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to 2-3 to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared. To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to ^0-1 to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated. If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements.
Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. For more information, see Configuring huge pages on Compute nodes . 4.1.7. Configuring CPU pinning on Compute nodes with simultaneous multithreading (SMT) If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware which means it is possible for a process running on one thread sibling to impact the performance of the other thread sibling. For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings: Thread sibling 1: logical CPU cores 0 and 2 Thread sibling 2: logical CPU cores 1 and 3 In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared. The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list , where N is the logical CPU number, contain the thread pairs. You can use the following command to identify which logical CPU cores are thread siblings: The following output indicates that logical CPU core 0 and logical CPU core 2 are threads on the same core: 4.1.8. Additional resources Discovering your NUMA node topology 4.2. Configuring emulator threads Compute nodes have overhead tasks associated with the hypervisor for each instance, known as emulator threads. By default, emulator threads run on the same CPUs as the instance, which impacts the performance of the instance. You can configure the emulator thread policy to run emulator threads on separate CPUs to those the instance uses. Note To avoid packet loss, you must never preempt the vCPUs in an NFV deployment. Prerequisites CPU pinning must be enabled. Procedure Log in to the undercloud as the stack user. Open your Compute environment file. To reserve physical CPU cores for instances that require CPU pinning, configure the NovaComputeCpuDedicatedSet parameter in the Compute environment file. For example, the following configuration sets the dedicated CPUs on a Compute node with a 32-core CPU: For more information, see Configuring CPU pinning on the Compute nodes . To reserve physical CPU cores for the emulator threads, configure the NovaComputeCpuSharedSet parameter in the Compute environment file. For example, the following configuration sets the shared CPUs on a Compute node with a 32-core CPU: Note The Compute scheduler also uses the CPUs in the shared set for instances that run on shared, or floating, CPUs. For more information, see Configuring CPU pinning on Compute nodes Add the Compute scheduler filter NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter, if not already present. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Configure a flavor that runs emulator threads for the instance on a dedicated CPU, which is selected from the shared CPUs configured using NovaComputeCpuSharedSet : For more information about configuration options for hw:emulator_threads_policy , see Emulator threads policy in Flavor metadata . 4.3. 
Configuring CPU feature flags for instances You can enable or disable CPU feature flags for an instance without changing the settings on the host Compute node and rebooting the Compute node. By configuring the standard set of CPU feature flags that are applied to instances, you are helping to achieve live migration compatibility across Compute nodes. You are also helping to manage the performance and security of the instances by disabling flags that have a negative impact on the security or performance of the instances with a particular CPU model, or enabling flags that provide mitigation from a security problem or alleviate performance problems. 4.3.1. Configuring CPU feature flags for instances Configure the Compute service to apply CPU feature flags to instances with specific vCPU models. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open your Compute environment file. Configure the instance CPU mode: Replace <cpu_mode> with the CPU mode of each instance on the Compute node. Set to one of the following valid values: host-model : (Default) Use the CPU model of the host Compute node. Use this CPU mode to automatically add critical CPU flags to the instance to provide mitigation from security flaws. custom : Use to configure the specific CPU models each instance should use. Note You can also set the CPU mode to host-passthrough to use the same CPU model and feature flags as the Compute node for the instances hosted on that Compute node. Optional: If you set NovaLibvirtCPUMode to custom , configure the instance CPU models that you want to customize: Replace <cpu_model> with a list of the CPU models that the host supports. List the CPU models in order, placing the more common and less advanced CPU models first in the list, and the more feature-rich CPU models last, for example: For a list of model names, see the /usr/share/libvirt/cpu_map.xml file, or use one of the following commands on the host Compute node: For a RHEL version 8.4 Compute node: For a RHEL version 9.2 Compute node: Replace <arch> with the name of the architecture of the Compute node, for example, x86_64 . Configure the CPU feature flags for instances with the specified CPU models: Replace <cpu_feature_flags> with a comma-separated list of feature flags to enable or disable. Prefix each flag with "+" to enable the flag, or "-" to disable it. If a prefix is not specified, the flag is enabled. For a list of the available feature flags for a given CPU model, see /usr/share/libvirt/cpu_map/*.xml . The following example enables the CPU feature flags pcid and ssbd for the IvyBridge and Cascadelake-Server models, and disables the feature flag mtrr . Add your Compute environment file to the stack with your other environment files and deploy the overcloud:
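Before applying the configuration examples that follow, it can be useful to confirm which feature flags the host CPUs actually expose. The short Python sketch below is a hypothetical helper rather than part of the documented procedure: it only reads /proc/cpuinfo on the Compute node, and the pcid, ssbd, and mtrr flags it checks are taken from the NovaLibvirtCPUModelExtraFlags example in section 4.3.1.
#!/usr/bin/env python3
"""Report which planned NovaLibvirtCPUModelExtraFlags entries the host CPU exposes."""

# Flags taken from the pcid/ssbd/mtrr example in section 4.3.1; adjust as needed.
WANTED = ["pcid", "ssbd", "mtrr"]

with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            host_flags = set(line.split(":", 1)[1].split())
            break
    else:
        raise SystemExit("no 'flags' line found in /proc/cpuinfo")

for flag in WANTED:
    print(f"{flag}: {'present' if flag in host_flags else 'missing'}")
Run it on each Compute node whose CPU models you plan to target; a flag that the host does not expose generally cannot be enabled for instances.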
[ "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_cpu_pinning.yaml Compute:ComputeCPUPinning Compute Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack baremetal node set --resource-class baremetal.CPU-PINNING <node>", "- name: Controller count: 3 - name: Compute count: 3 - name: ComputeCPUPinning count: 1 defaults: resource_class: baremetal.CPU-PINNING network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1", "(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml", "(undercloud)USD watch openstack baremetal node list", "parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeCPUPinningNetworkConfigTemplate: /home/stack/templates/nic-configs/<cpu_pinning_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2", "parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter", "parameter_defaults: ComputeCPUPinningParameters: NovaComputeCpuDedicatedSet: 1,3,5,7", "parameter_defaults: ComputeCPUPinningParameters: NovaComputeCpuSharedSet: 2,6", "parameter_defaults: ComputeCPUPinningParameters: NovaReservedHugePages: <ram>", "parameter_defaults: ComputeCPUPinningParameters: IsolCpusList: 1-3,5-7", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_cpu_pinning.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/cpu_pinning.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml", "(undercloud)USD source ~/overcloudrc", "(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> pinned_cpus", "(overcloud)USD openstack flavor set --property hw:cpu_policy=dedicated pinned_cpus", "(overcloud)USD openstack flavor set --property hw:mem_page_size=<page_size> pinned_cpus", "(overcloud)USD openstack flavor set --property hw:cpu_thread_policy=require pinned_cpus", "(overcloud)USD openstack server create --flavor pinned_cpus --image <image> pinned_cpu_instance", "(undercloud)USD source ~/overcloudrc", "(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus", "(overcloud)USD openstack flavor set --property hw:cpu_policy=shared floating_cpus", "(overcloud)USD openstack flavor set --property hw:mem_page_size=<page_size> pinned_cpus", "(undercloud)USD source ~/overcloudrc", "(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <number_of_reserved_vcpus> --property hw:cpu_policy=mixed mixed_CPUs_flavor", "(overcloud)USD openstack flavor set --property hw:cpu_dedicated_mask=<CPU_number> mixed_CPUs_flavor", "(overcloud)USD openstack flavor set --property hw:mem_page_size=<page_size> pinned_cpus", "grep -H . 
/sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u", "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2 /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1,3", "parameter_defaults: NovaComputeCpuDedicatedSet: 2-15,18-31", "parameter_defaults: NovaComputeCpuSharedSet: 0,1,16,17", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)USD openstack flavor set --property hw:cpu_policy=dedicated --property hw:emulator_threads_policy=share dedicated_emulator_threads", "[stack@director ~]USD source ~/stackrc", "parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: <cpu_mode>", "parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: <cpu_model>", "NovaLibvirtCPUModels: - SandyBridge - IvyBridge - Haswell-noTSX-IBRS", "sudo podman exec -it nova_libvirt virsh cpu-models <arch>", "sudo podman exec -it nova_virtqemud virsh cpu-models <arch>", "parameter_defaults: ComputeParameters: NovaLibvirtCPUModelExtraFlags: <cpu_feature_flags>", "parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: - IvyBridge - Cascadelake-Server NovaLibvirtCPUModelExtraFlags: 'pcid,+ssbd,-mtrr'", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml" ]
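Section 4.1.7 recommends keeping thread siblings together when you divide cores between NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet. The following Python sketch is a hypothetical convenience script, not part of the official tooling: it reads the same sysfs thread_siblings_list files as the grep command shown above and prints each sibling group so you can assign whole pairs to one set or the other.
#!/usr/bin/env python3
"""Group logical CPUs into thread-sibling sets (run directly on the Compute node)."""
from glob import glob

def expand(spec):
    """Expand a sysfs CPU list such as '0,2' or '0-1' into a sorted tuple of ints."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            low, high = part.split("-")
            cpus.extend(range(int(low), int(high) + 1))
        else:
            cpus.append(int(part))
    return tuple(sorted(cpus))

sibling_sets = set()
for path in glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"):
    with open(path) as handle:
        sibling_sets.add(expand(handle.read().strip()))

for siblings in sorted(sibling_sets):
    print("thread siblings:", ",".join(str(cpu) for cpu in siblings))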
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-cpus-on-compute-nodes
Chapter 3. User tasks
Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.18 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to give additional users this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub by using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page, configure your Operator installation: If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install. Note The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel. Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. For clusters on cloud providers with token authentication enabled: If the cluster uses AWS Security Token Service ( STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role's ARN, follow the procedure described in Preparing AWS account . 
If the cluster uses Microsoft Entra Workload ID ( Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate fields. If the cluster uses Google Cloud Platform Workload Identity ( GCP Workload Identity / Federated Identity Mode in the web console), add the project number, pool ID, provider ID, and service account email in the appropriate fields. For Update approval , select either the Automatic or Manual approval strategy. Important If the web console shows that the cluster uses AWS STS, Microsoft Entra Workload ID, or GCP Workload Identity, you must set Update approval to Manual . Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster: If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. Verification After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. When the Operator is installed, the metadata indicates which channel and version are installed. Note The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context. 3.2.4. Installing from OperatorHub by using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. For SingleNamespace install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. Tip In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup and Subscription objects automatically when choosing SingleNamespace mode. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example 3.1. 
Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m # ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m # ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m # ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace Example 3.2. Example output # ... Kind: PackageManifest # ... Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces # ... Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 # ... Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4 1 Indicates which install modes are supported. 2 3 Example channel names. 4 The channel selected by default if one is not specified. Tip You can print an Operator's version and channel information in YAML format by running the following command: USD oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog: USD oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml Important If you do not specify the Operator's catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met: Multiple catalogs are installed in the same namespace. The catalogs contain the same Operators or Operators with the same name. If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators . If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create one by following these steps: Important You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml , for SingleNamespace install mode: Example OperatorGroup object for SingleNamespace install mode apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2 1 2 For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields.
Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object to subscribe a namespace to an Operator: Create a YAML file for the Subscription object, for example subscription.yaml : Note If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version". Example 3.3. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate environment variables in the container. 8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Example 3.4. Example Subscription object with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. 
For clusters on cloud providers with token authentication enabled, such as Amazon Web Services (AWS) Security Token Service (STS), Microsoft Entra Workload ID, or Google Cloud Platform Workload Identity, configure your Subscription object by following these steps: Ensure the Subscription object is set to manual update approvals: Example 3.5. Example Subscription object with manual update approvals kind: Subscription # ... spec: installPlanApproval: Manual 1 1 Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update. Include the relevant cloud provider-specific fields in the Subscription object's config section: If the cluster is in AWS STS mode, include the following fields: Example 3.6. Example Subscription object with AWS STS variables kind: Subscription # ... spec: config: env: - name: ROLEARN value: "<role_arn>" 1 1 Include the role ARN details. If the cluster is in Workload ID mode, include the following fields: Example 3.7. Example Subscription object with Workload ID variables kind: Subscription # ... spec: config: env: - name: CLIENTID value: "<client_id>" 1 - name: TENANTID value: "<tenant_id>" 2 - name: SUBSCRIPTIONID value: "<subscription_id>" 3 1 Include the client ID. 2 Include the tenant ID. 3 Include the subscription ID. If the cluster is in GCP Workload Identity mode, include the following fields: Example 3.8. Example Subscription object with GCP Workload Identity variables kind: Subscription # ... spec: config: env: - name: AUDIENCE value: "<audience_url>" 1 - name: SERVICE_ACCOUNT_EMAIL value: "<service_account_email>" 2 where: <audience> Created in GCP by the administrator when they set up GCP Workload Identity, the AUDIENCE value must be a preformatted URL in the following format: //iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id> <service_account_email> The SERVICE_ACCOUNT_EMAIL value is a GCP service account email that is impersonated during Operator operation, for example: <service_account_name>@<project_id>.iam.gserviceaccount.com Create the Subscription object by running the following command: USD oc apply -f subscription.yaml If you set the installPlanApproval field to Manual , manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update". At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verification Check the status of the Subscription object for your installed Operator by running the following command: USD oc describe subscription <subscription_name> -n <namespace> If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command: USD oc describe operatorgroup <operatorgroup_name> -n <namespace> Additional resources Operator groups Channel names Additional resources Manually approving a pending Operator update
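For SingleNamespace install mode, the procedure above requires exactly one Operator group in the target namespace. The following Python sketch checks for existing OperatorGroup objects before you create a Subscription. It uses the kubernetes Python client and a hypothetical example-operator namespace, neither of which is part of the documented procedure; the same check can be done with oc get operatorgroup -n <namespace>.
#!/usr/bin/env python3
"""Check how many OperatorGroup objects exist in a namespace before subscribing."""
from kubernetes import client, config

NAMESPACE = "example-operator"  # hypothetical namespace; replace with your own

config.load_kube_config()  # use config.load_incluster_config() when run in a pod
api = client.CustomObjectsApi()

groups = api.list_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1",
    namespace=NAMESPACE,
    plural="operatorgroups",
)["items"]

names = [group["metadata"]["name"] for group in groups]
if len(names) == 1:
    print(f"OK: OperatorGroup {names[0]} already exists in {NAMESPACE}")
elif not names:
    print(f"No OperatorGroup in {NAMESPACE}: create one before creating the Subscription")
else:
    print(f"Warning: more than one OperatorGroup in {NAMESPACE}: {names}")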
[ "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2", "//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>", "<service_account_name>@<project_id>.iam.gserviceaccount.com", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operators/user-tasks
4.6. RHEA-2012:0840 - new packages: java-1.7.0-ibm
4.6. RHEA-2012:0840 - new packages: java-1.7.0-ibm New java-1.7.0-ibm packages are now available for Red Hat Enterprise Linux 6. The java-1.7.0-ibm packages provide the IBM Java 7 Runtime Environment and the IBM Java 7 Software Development Kit. This update adds the java-1.7.0-ibm packages to Red Hat Enterprise Linux 6. (BZ# 693783 ) Note: Before applying this update, make sure that any IBM Java packages have been removed. All users who require java-1.7.0-ibm should install these new packages.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0840
Chapter 3. Creating a Virtual Machine
Chapter 3. Creating a Virtual Machine After you have installed the virtualization packages on your Red Hat Enterprise Linux 7 host system, you can create virtual machines and install guest operating systems using the virt-manager interface. Alternatively, you can use the virt-install command-line utility with a list of parameters or with a script. Both methods are covered in this chapter. 3.1. Guest Virtual Machine Deployment Considerations Various factors should be considered before creating any guest virtual machines. The role of a virtual machine should be evaluated before deployment, but regular monitoring and assessment based on variable factors (load, number of clients) should also be performed. The factors include: Performance Guest virtual machines should be deployed and configured based on their intended tasks. Some guest systems (for instance, guests running a database server) may require special performance considerations. Guests may require more assigned CPUs or memory based on their role and projected system load. Input/Output requirements and types of Input/Output Some guest virtual machines may have a particularly high I/O requirement or may require further considerations or projections based on the type of I/O (for instance, typical disk block size access, or the number of clients). Storage Some guest virtual machines may require higher priority access to storage or faster disk types, or may require exclusive access to areas of storage. The amount of storage used by guests should also be regularly monitored and taken into account when deploying and maintaining storage. Make sure to read all the considerations outlined in Red Hat Enterprise Linux 7 Virtualization Security Guide . It is also important to understand that your physical storage may limit your options for virtual storage. Networking and network infrastructure Depending upon your environment, some guest virtual machines could require faster network links than other guests. Bandwidth or latency are often factors when deploying and maintaining guests, especially as requirements or load changes. Request requirements SCSI requests can only be issued to guest virtual machines on virtio drives if the virtio drives are backed by whole disks, and the disk device parameter is set to lun in the domain XML file , as shown in the following example:
[ "<devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='block' device='lun'>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-virtual_machine_installation
Chapter 12. Troubleshooting
Chapter 12. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 12.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.7 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volume claims that are linked to the persistent volumes of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 12.2. MTC custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 12.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 12.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 12.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 12.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 12.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true <.> analyzeK8SResources: true <.> analyzePVCapacity: true <.> listImages: false <.> listImagesLimit: 50 <.> migPlanRef: name: <migplan> namespace: openshift-migration <.> Optional: Returns the number of images. <.> Optional: Returns the number, kind, and API version of the Kubernetes resources. <.> Optional: Returns the PV capacity. <.> Returns a list of image names. The default is false so that the output is not excessively long. <.> Optional: Specify the maximum number of image names to return if listImages is true . 12.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 12.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 12.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a MigMigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster and to return the names of pods that are not in a Running state. 12.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration.
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 12.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 12.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 12.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu next to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 12.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 12.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 12.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Monitoring Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 12.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 12.3.4.1.1.
cam_app_workload_migrations This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry. Table 12.1. cam_app_workload_migrations metric Queryable label name Sample label values Label description status running , idle , failed , completed Status of the MigMigration CR type stage, final Type of the MigMigration CR 12.3.4.1.2. mtc_client_request_count This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 12.2. mtc_client_request_count metric Queryable label name Sample label values Label description cluster https://migcluster-url:443 Cluster that the request was issued against component MigPlan , MigCluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes kind the request was issued for 12.3.4.1.3. mtc_client_request_elapsed This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 12.3. mtc_client_request_elapsed metric Queryable label name Sample label values Label description cluster https://cluster-url.com:443 Cluster that the request was issued against component migplan , migcluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes resource that the request was issued for 12.3.4.1.4. Useful queries The table lists some helpful queries that can be used for monitoring performance. Table 12.4. Useful queries Query Description mtc_client_request_count Number of API requests issued, sorted by request type sum(mtc_client_request_count) Total number of API requests issued mtc_client_request_elapsed API request latency, sorted by request type sum(mtc_client_request_elapsed) Total latency of API requests sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) Average latency of API requests mtc_client_request_elapsed / mtc_client_request_count Average latency of API requests, sorted by request type cam_app_workload_migrations{status="running"} * 100 Count of running migrations, multiplied by 100 for easier viewing alongside request counts 12.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . 
To collect data for the past 24 hours: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 \ -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . Viewing metrics data with the Prometheus console You can view the metrics data with the Prometheus console. Procedure Decompress the prom_data.tar.gz file: USD tar -xvzf must-gather/metrics/prom_data.tar.gz Create a local Prometheus instance: USD make prometheus-run The command outputs the Prometheus URL. Output Started Prometheus on http://localhost:9090 Launch a web browser and navigate to the URL to view the data by using the Prometheus web console. After you have viewed the data, delete the Prometheus instance and data: USD make prometheus-cleanup 12.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 12.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. 
reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 12.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 12.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 12.4.1. Updating deprecated internal images If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster. If an OpenShift Container Platform 3 image is deprecated in OpenShift Container Platform 4.7, you can manually update the image stream tag by using podman . 
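The prerequisites and step-by-step procedure for updating a deprecated internal image follow; as a compact preview of the same podman flow in a single shell session, with the registry routes, port, and image name shown as placeholders rather than values from this document:

# Log in to the exposed OpenShift Container Platform 3 registry (run oc login against the 3.x cluster first)
podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false <ocp3_registry_url>:<port>
# Log in to the exposed OpenShift Container Platform 4 registry (run oc login against the 4.x cluster first)
podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false <ocp4_registry_url>:<port>
# Pull the deprecated image from the 3.x registry, retag it for the 4.x registry, and push it
podman pull <ocp3_registry_url>:<port>/openshift/<image>
podman tag <ocp3_registry_url>:<port>/openshift/<image> <ocp4_registry_url>:<port>/openshift/<image>
podman push <ocp4_registry_url>:<port>/openshift/<image>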
Prerequisites You must have podman installed. You must be logged in as a user with cluster-admin privileges. If you are using insecure registries, add your registry host values to the [registries.insecure] section of /etc/container/registries.conf to ensure that podman does not encounter a TLS verification error. The internal registries must be exposed on the source and target clusters. Procedure Ensure that the internal registries are exposed on the OpenShift Container Platform 3 and 4 clusters. The internal registry is exposed by default on OpenShift Container Platform 4. If you are using insecure registries, add your registry host values to the [registries.insecure] section of /etc/container/registries.conf to ensure that podman does not encounter a TLS verification error. Log in to the OpenShift Container Platform 3 registry: USD podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port> Log in to the OpenShift Container Platform 4 registry: USD podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port> Pull the OpenShift Container Platform 3 image: USD podman pull <registry_url>:<port>/openshift/<image> Tag the OpenShift Container Platform 3 image for the OpenShift Container Platform 4 registry: USD podman tag <registry_url>:<port>/openshift/<image> \ 1 <registry_url>:<port>/openshift/<image> 2 1 Specify the registry URL and port for the OpenShift Container Platform 3 cluster. 2 Specify the registry URL and port for the OpenShift Container Platform 4 cluster. Push the image to the OpenShift Container Platform 4 registry: USD podman push <registry_url>:<port>/openshift/<image> 1 1 Specify the OpenShift Container Platform 4 cluster. Verify that the image has a valid image stream: USD oc get imagestream -n openshift | grep <image> Example output NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago 12.4.2. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 12.4.3. 
Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 12.4.3.1. CA certificate error displayed when accessing the MTC console for the first time If the MTC console displays a CA certificate error message the first time you try to access it, the likely cause is that a cluster uses self-signed CA certificates. Navigate to the oauth-authorization-server URL in the error message and accept the certificate. To resolve this issue permanently, install the certificate authority so that it is trusted. If the browser displays an Unauthorized message after you have accepted the CA certificate, navigate to the MTC console and then refresh the web page. 12.4.3.2. OAuth timeout error in the MTC console If the MTC console displays a connection has timed out message after you have accepted a self-signed certificate, the cause is likely to be one of the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration blocking access to the OAuth server. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 12.4.3.3. Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 12.4.3.4. Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: Error checking repository for stale locks Error getting backup storage location: backupstoragelocation.velero.io \"my-bsl\" not found You can ignore these error messages. A missing BSL cannot cause a migration to fail. 12.4.3.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the Velero pod log displays the following error: level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/ heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/ velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . 
In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 12.4.3.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the MigMigration CR displays the following error: MigMigration CR status status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to troubleshoot the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. PodVolumeRestore CR with Restic pod error completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 12.4.3.7. Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The Restic pod log displays the following error: Restic permission error backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/ velero/pkg/controller/pod_volume_backup_controller.go:280" error.function= "github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 12.4.4. 
Known issues This release has the following known issues: During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. ( BZ#1748440 ) Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h ) in the MigrationController custom resource (CR) manifest. If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower. If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody . The migration fails and a permission error is displayed in the Restic pod log. ( BZ#1873641 ) You can resolve this issue by adding supplemental groups for Restic to the MigrationController CR manifest: spec: ... restic_supplemental_groups: - 5555 - 6666 If you perform direct volume migration with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the PVC. ( BZ#1947487 ) 12.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 12.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 
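If you prefer to verify the rollback from the command line instead of the web console, the following is a rough CLI equivalent of the verification steps above; it uses only standard oc queries, and the namespace is the migrated project:

# Pods should be Running again in the rolled-back project
oc get pods -n <namespace>
# The application route, if one exists, should still resolve
oc get routes -n <namespace>
# Persistent volume claims should be Bound
oc get pvc -n <namespace>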
12.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 12.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console
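Supplementing the manual rollback procedure in section 12.5.3 above: the premigration replica count that MTC records in the migration.openshift.io/preQuiesceReplicas annotation can be read back with a jsonpath query and fed directly into oc scale. This is only a sketch; the deployment and namespace names are placeholders:

# Read the replica count recorded before the application was quiesced
REPLICAS=$(oc get deployment <deployment> -n <namespace> \
  -o jsonpath='{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}')
# Scale the application back to its premigration size
oc scale deployment <deployment> -n <namespace> --replicas="$REPLICAS"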
[ "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true <.> analyzeK8SResources: true <.> analyzePVCapacity: true <.> listImages: false <.> listImagesLimit: 50 <.> migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. 
exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump", "tar -xvzf must-gather/metrics/prom_data.tar.gz", "make prometheus-run", "Started Prometheus on http://localhost:9090", "make prometheus-cleanup", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: 
conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman pull <registry_url>:<port>/openshift/<image>", "podman tag <registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2", "podman push <registry_url>:<port>/openshift/<image> 1", "oc get imagestream -n openshift | grep <image>", "NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 
32 seconds ago", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "Error checking repository for stale locks Error getting backup storage location: backupstoragelocation.velero.io \\\"my-bsl\\\" not found", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/ heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/ velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/ velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function= \"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "spec: restic_supplemental_groups: - 5555 - 6666", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
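The Velero invocations shown above are long to type repeatedly. As a convenience only - this wrapper is not part of the MTC documentation and the CR names are placeholders - the oc exec call can be wrapped in a small shell function:
# Hypothetical helper that runs any Velero subcommand inside the MTC Velero pod.
velero_mtc() {
    oc -n openshift-migration exec deployment/velero -c velero -- ./velero "$@"
}
# Example usage with placeholder CR names:
velero_mtc backup describe <backup_cr_name>
velero_mtc restore logs <restore_cr_name>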
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/troubleshooting-3-4
function::task_max_file_handles
function::task_max_file_handles Name function::task_max_file_handles - The max number of open files for the task. Synopsis Arguments task task_struct pointer. General Syntax task_max_file_handles:long(task:long) Description This function returns the maximum number of file handles for the given task.
[ "function task_max_file_handles:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-max-file-handles
Chapter 16. Synchronizing Red Hat Directory Server with Microsoft Active Directory
Chapter 16. Synchronizing Red Hat Directory Server with Microsoft Active Directory Windows Synchronization carries over changes in a directory - adds, deletes, and changes in groups, users, and passwords - between Red Hat Directory Server and Microsoft Active Directory. This makes it much more efficient and effective to maintain consistent information across directories. 16.1. About Windows Synchronization Synchronization allows the user and group entries in Active Directory to be matched with the entries in the Red Hat Directory Server. As entries are created, modified, or deleted, the corresponding change is made to the sync peer server, allowing two-way synchronization of users, passwords, and groups. The synchronization process is analogous to the replication process: the synchronization is enabled by a plug-in, configured and initiated through a sync agreement, and record of directory changes is maintained and updates are sent according to that changelog. This synchronizes users and groups between Directory Server and a Windows server. Windows Synchronization has two parts, one for user and group entries and the other for passwords: Directory Server Windows Synchronization. Synchronization for user and group entries is configured in a synchronization agreement, much like replication is configured in a replication agreement. A sync agreement defines what kinds of entries are synchronized (users, groups, or both) and which direction changes are synchronized (from the Directory Server to Active Directory, from Active Directory to Directory Server, or both). The Directory Server relies on the Multi-Supplier Replication Plug-in to synchronize user and group entries. The same changelog that is used for multi-supplier replication is also used to send updates from the Directory Server to Active Directory as LDAP operations. The server also performs LDAP search operations against its Windows server to synchronize changes made to Windows entries to the corresponding Directory Server entry. Password Synchronization Service. If you set the nsslapd-unhashed-pw-switch parameter in the cn=config entry to on , password changes made on Directory Server are automatically synchronized over to Active Directory. However, there must be a special hook to recognize and transmit password changes on Active Directory over to Directory Server. This is done by the Password Synchronization Service. This application captures password changes on the Active Directory domain controller and sends them to the Directory Server over LDAPS. The Password Synchronization Service must be installed on every Active Directory domain controller. Figure 16.1. Active Directory - Directory Server Synchronization Process Synchronization is configured and controlled by one or more synchronization agreements , which establishes synchronization between sync peers , the directory servers being synchronized. These are similar in purpose to replication agreements and contain a similar set of information, including the host name (or IPv4 or IPv6 address) and port number for Active Directory. The Directory Server connects to its peer Windows server using LDAP/LDAPS to both send and receive updates. LDAP, a standard connection, can be used for syncing user and group entries alone, but to synchronize passwords, some sort of secure connection is required. 
If a secure connection is not used, the Windows domain will not accept password changes from the Directory Server and the Password Synchronization Service will not send passwords from the Active Directory domain to the Directory Server. Windows Synchronization allows both LDAPS using TLS and STARTTLS. Multiple subtree pairs can be configured to sync each other. Unlike replication, which connects databases , synchronization is between suffixes , parts of the directory tree structure. The synchronized Active Directory and Directory Server suffixes are both specified in the sync agreement. All entries within the respective subtrees are candidates for synchronization, including entries that are not immediate children of the specified suffix DN. Note Any descendant container entries need to be created separately in Active Directory by an administrator; Windows Synchronization does not create container entries. The Directory Server maintains a changelog , a database that records modifications that have occurred. The changelog is used by Windows Synchronization to coordinate and send changes made to the Active Directory peer. Changes to entries in Active Directory are found by using Active Directory's Dirsync search feature. Directory Server runs the Dirsync search periodically by default every five minutes to check for changes on the Active Directory server. You can change this default by setting the winSyncInterval parameter in the cn= syncAgreement_Name ,cn=WindowsReplica,cn= suffix_Name ,cn=mapping tree,cn=config entry. Using Dirsync ensures that only those entries that have changed since the search are retrieved. In some situations, such as when synchronization is configured or there have been major changes to directory data, a total update, or resynchronization , can be run. This examines every entry in both sync peers and sends any modifications or missing entries. A full Dirsync search is initiated whenever a total update is run. See Section 16.11, "Sending Synchronization Updates" for more information. Windows Synchronization provides some control over which entries are synchronized to grant administrators fine-grained control of the entries that are synchronized and to give sufficient flexibility to support different deployment scenarios. This control is set through different configuration attributes set in the Directory Server: When creating the sync agreement, there is an option to synchronizing new Windows entries ( nsDS7NewWinUserSyncEnabled and nsDS7NewWinGroupSyncEnabled ) as they are created. If these attributes are set to on , then existing Windows users/groups are synchronized to the Directory Server, and users/groups as they are created are synchronized to the Directory Server. Within the Windows subtree, only entries with user or group object classes can be synchronized to Directory Server. On the Directory Server, only entries with the ntUser or ntGroup object classes and attributes can be synchronized. The placement of the sync agreement depends on what suffixes are synchronized; for a single suffix, the sync agreement is made for that suffix alone; for multiple suffixes, the sync agreement is made at a higher branch of the directory tree. 
To propagate Windows entries and updates throughout the Directory Server deployment, make the agreement between a supplier in a multi-supplier replication environment, and use that supplier to replicate the changes across the Directory Server deployment, as shown in Figure 16.2, "Multi-Supplier Directory Server - Windows Domain Synchronization" . Important While it is possible to configure a sync agreement on a hub server, this only allows uni-directional synchronization, from Red Hat Directory Server to Active Directory. The Active Directory server cannot sync any changes back to the hub. It is strongly recommended that only suppliers in multi-supplier replication be used to configure synchronization agreements. Warning There can only be a single sync agreement between the Directory Server environment and the Active Directory environment. Multiple sync agreements to the same Active Directory domain can create entry conflicts. Figure 16.2. Multi-Supplier Directory Server - Windows Domain Synchronization Directory Server passwords are synchronized along with other entry attributes because plain-text passwords are retained in the Directory Server changelog. The Password Synchronization service is needed to catch password changes made on Active Directory. Without the Password Synchronization service, it would be impossible to have Windows passwords synchronized because passwords are hashed in Active Directory, and the Windows hashing function is incompatible with the one used by Directory Server.
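For example, the Dirsync polling interval mentioned above can be changed by modifying the winSyncInterval attribute of the sync agreement entry. This is only a sketch: the agreement name and suffix are placeholders, the bind is assumed to be as Directory Manager, and the value is assumed to be in seconds (600 corresponds to ten minutes).
# Set the Dirsync polling interval of a sync agreement to 600 seconds.
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost <<EOF
dn: cn=<syncAgreement_Name>,cn=WindowsReplica,cn=<suffix_Name>,cn=mapping tree,cn=config
changetype: modify
replace: winSyncInterval
winSyncInterval: 600
EOF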
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Windows_Sync
34.2. Configuring Automount
34.2. Configuring Automount in Identity Management, configuring automount entries like locations and maps requires an existing autofs/NFS server. Creating automount entries does not create the underlying autofs configuration. Autofs can be configured manually using LDAP or SSSD as a data store, or it can be configured automatically. Note Before changing the automount configuration, test that for at least one user, their /home directory can be mounted from the command line successfully. Making sure that NFS is working properly makes it easier to troubleshoot any potential IdM automount configuration errors later. 34.2.1. Configuring NFS Automatically After a system is configured as an IdM client, which includes IdM servers and replicas that are configured as domain clients as part of their configuration, autofs can be configured to use the IdM domain as its NFS domain and have autofs services enabled. By default, the ipa-client-automount utility automatically configures the NFS configuration files, /etc/sysconfig/nfs and /etc/idmapd.conf . It also configures SSSD to manage the credentials for NFS. If the ipa-client-automount command is run without any options, it runs a DNS discovery scan to identify an available IdM server and creates a default location called default . It is possible to specify an IdM server to use and to create an automount location other than default: Along with setting up NFS, the ipa-client-automount utility configures SSSD to cache automount maps, in case the external IdM store is ever inaccessible. Configuring SSSD does two things: It adds service configuration information to the SSSD configuration. The IdM domain entry is given settings for the autofs provider and the mount location. And NFS is added to the list of supported services ( services = nss,pam,autofs... ) and given a blank configuration entry ( [autofs] ). The Name Service Switch (NSS) service information is updated to check SSSD first for automount information, and then the local files. There may be some instances, such as highly secure environments, where it is not appropriate for a client to cache automount maps. In that case, the ipa-client-automount command can be run with the --no-sssd option, which changes all of the required NFS configuration files, but does not change the SSSD configuration. If --no-sssd is used, the list of configuration files updated by ipa-client-automount is different: The command updates /etc/sysconfig/autofs instead of /etc/sysconfig/nfs . The command configures /etc/autofs_ldap_auth.conf with the IdM LDAP configuration. The command configures /etc/nsswitch.conf to use the LDAP services for automount maps. Note The ipa-client-automount command can only be run once. If there is an error in the configuration, than the configuration files need to be edited manually. 34.2.2. Configuring autofs Manually to Use SSSD and Identity Management Edit the /etc/sysconfig/autofs file to specify the schema attributes that autofs searches for: Specify the LDAP configuration. There are two ways to do this. The simplest is to let the automount service discover the LDAP server and locations on its own: Alternatively, explicitly set which LDAP server to use and the base DN for LDAP searches: Note The default value for location is default . If additional locations are added ( Section 34.5, "Configuring Locations" ), then the client can be pointed to use those locations, instead. Edit the /etc/autofs_ldap_auth.conf file so that autofs allows client authentication with the IdM LDAP server. 
Change authrequired to yes. Set the principal to the Kerberos host principal for the NFS client server, host/fqdn@REALM . The principal name is used to connect to the IdM directory as part of GSS client authentication. <autofs_ldap_sasl_conf usetls="no" tlsrequired="no" authrequired="yes" authtype="GSSAPI" clientprinc="host/[email protected]" /> If necessary, run klist -k to get the exact host principal information. Configure autofs as one of the services which SSSD manages. Open the SSSD configuration file. Add the autofs service to the list of services handled by SSSD. Create a new [autofs] section. This can be left blank; the default settings for an autofs service work with most infrastructures. Optionally, set a search base for the autofs entries. By default, this is the LDAP search base, but a subtree can be specified in the ldap_autofs_search_base parameter. Restart SSSD: Check the /etc/nsswitch.conf file, so that SSSD is listed as a source for automount configuration: Restart autofs: Test the configuration by listing a user's /home directory: If this does not mount the remote file system, check the /var/log/messages file for errors. If necessary, increase the debug level in the /etc/sysconfig/autofs file by setting the LOGGING parameter to debug . Note If there are problems with automount, then cross-reference the automount attempts with the 389 Directory Server access logs for the IdM instance, which will show the attempted access, user, and search base. It is also simple to run automount in the foreground with debug logging on. This prints the debug log information directly, without having to cross-check the LDAP access log with automount's log. 34.2.3. Configuring Automount on Solaris Note Solaris uses a different schema for autofs configuration than the schema used by Identity Management. Identity Management uses the 2307bis-style automount schema which is defined for 389 Directory Server (and used in IdM's internal Directory Server instance). If the NFS server is running on Red Hat Enterprise Linux, specify on the Solaris machine that NFSv3 is the maximum supported version. Edit the /etc/default/nfs file and set the following parameter: Use the ldapclient command to configure the host to use LDAP: Enable automount : Test the configuration. Check the LDAP configuration: List a user's /home directory:
[ "ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/nsswitch.conf Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs", "ipa-client-automount --server=ipaserver.example.com --location=boston", "autofs_provider = ipa ipa_automount_location = default", "automount: sss files", "ipa-client-automount --no-sssd", "# Other common LDAP naming # MAP_OBJECT_CLASS=\"automountMap\" ENTRY_OBJECT_CLASS=\"automount\" MAP_ATTRIBUTE=\"automountMapName\" ENTRY_ATTRIBUTE=\"automountKey\" VALUE_ATTRIBUTE=\"automountInformation\"", "LDAP_URI=\"ldap:///dc=example,dc=com\"", "LDAP_URI=\"ldap://ipa.example.com\" SEARCH_BASE=\"cn= location ,cn=automount,dc=example,dc=com\"", "<autofs_ldap_sasl_conf usetls=\"no\" tlsrequired=\"no\" authrequired=\"yes\" authtype=\"GSSAPI\" clientprinc=\"host/[email protected]\" />", "vim /etc/sssd/sssd.conf", "[sssd] services = nss,pam, autofs", "[nss] [pam] [sudo] [autofs] [ssh] [pac]", "[domain/EXAMPLE] ldap_search_base = \"dc=example,dc=com\" ldap_autofs_search_base = \"ou=automount,dc=example,dc=com\"", "systemctl restart sssd.service", "automount: sss files", "systemctl restart autofs.service", "ls /home/ userName", "automount -f -d", "NFS_CLIENT_VERSMAX=3", "ldapclient -v manual -a authenticationMethod=none -a defaultSearchBase=dc=example,dc=com -a defaultServerList=ipa.example.com -a serviceSearchDescriptor=passwd:cn=users,cn=accounts,dc=example,dc=com -a serviceSearchDescriptor=group:cn=groups,cn=compat,dc=example,dc=com -a serviceSearchDescriptor=auto_master:automountMapName=auto.master,cn= location ,cn=automount,dc=example,dc=com?one -a serviceSearchDescriptor=auto_home:automountMapName=auto_home,cn= location ,cn=automount,dc=example,dc=com?one -a objectClassMap=shadow:shadowAccount=posixAccount -a searchTimelimit=15 -a bindTimeLimit=5", "svcadm enable svc:/system/filesystem/autofs", "ldapclient -l auto_master dn: automountkey=/home,automountmapname=auto.master,cn= location ,cn=automount,dc=example,dc=com objectClass: automount objectClass: top automountKey: /home automountInformation: auto.home", "ls /home/ userName" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-automount
9.4.4. Using LDAP to Store Automounter Maps
9.4.4. Using LDAP to Store Automounter Maps LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a dependency of the automounter . To configure LDAP access, modify /etc/openldap/ldap.conf . Ensure that BASE, URI, and schema are set appropriately for your site. The most recently established schema for storing automount maps in LDAP is described by rfc2307bis . To use this schema it is necessary to set it in the autofs configuration /etc/autofs.conf by removing the comment characters from the schema definition. Example 9.4. Setting autofs configuration Note As of Red Hat Enterprise Linux 6.6, LDAP autofs is set in the /etc/autofs.conf file instead of the /etc/systemconfig/autofs file as was the case in releases. Ensure that these are the only schema entries not commented in the configuration. The automountKey replaces the cn attribute in the rfc2307bis schema. An LDIF of a sample configuration is described below: Example 9.5. LDIF configuration
[ "map_object_class = automountMap entry_object_class = automount map_attribute = automountMapName entry_attribute = automountKey value_attribute = automountInformation", "extended LDIF # LDAPv3 base <> with scope subtree filter: (&(objectclass=automountMap)(automountMapName=auto.master)) requesting: ALL # auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: top objectClass: automountMap automountMapName: auto.master extended LDIF # LDAPv3 base <automountMapName=auto.master,dc=example,dc=com> with scope subtree filter: (objectclass=automount) requesting: ALL # /home, auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: automount cn: /home automountKey: /home automountInformation: auto.home extended LDIF # LDAPv3 base <> with scope subtree filter: (&(objectclass=automountMap)(automountMapName=auto.home)) requesting: ALL # auto.home, example.com dn: automountMapName=auto.home,dc=example,dc=com objectClass: automountMap automountMapName: auto.home extended LDIF # LDAPv3 base <automountMapName=auto.home,dc=example,dc=com> with scope subtree filter: (objectclass=automount) requesting: ALL # foo, auto.home, example.com dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: foo automountInformation: filer.example.com:/export/foo /, auto.home, example.com dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: / automountInformation: filer.example.com:/export/&" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s2-nfs-config-autofs-ldap
Chapter 2. Storage
Chapter 2. Storage 2.1. Storage Domains Overview A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), ISO files, and metadata about themselves. A storage domain can be made of either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems). On NAS, all virtual disks, templates, and snapshots are files. On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See the Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM. Virtual disks can have one of two formats, either QCOW2 or raw. The type of storage can be either sparse or preallocated. Snapshots are always sparse but can be taken for disks of either format. Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.
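As an informal illustration of the block-domain layout described above, the LVM tools on a host attached to an iSCSI or FCP storage domain show the volume group and the logical volumes that back the virtual disks; the volume group name below is a placeholder.
# Show the volume group that backs a block storage domain (name is an example).
vgs <storage_domain_vg>
# List the logical volumes (virtual disks, templates, snapshots) in that group.
lvs -o lv_name,lv_size,lv_tags <storage_domain_vg>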
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/chap-storage
Chapter 5. Validating your OpenStack cloud with the Integration Test Suite (tempest)
Chapter 5. Validating your OpenStack cloud with the Integration Test Suite (tempest) You can run Integration Test Suite validations in many ways with the tempest run command. You can also combine multiple options in a single tempest run command. 5.1. Prerequisites An OpenStack environment that contains the Integration Test Suite packages. An Integration Test Suite configuration that corresponds to your OpenStack environment. For more information, see Creating a workspace. 5.2. Listing available tests Use the --list-tests option to list all available tests. Procedure Enter the tempest run command with either the --list-tests or -l options to get a list of available tempest tests: 5.3. Running smoke tests Smoke testing is a type of preliminary testing that covers only the most important functionality. Although these tests are not comprehensive, running smoke tests can save time if they do identify a problem. Procedure Enter the tempest run command with the --smoke option: 5.4. Passing tests by using allowlist files An allowlist file is a file that contains regular expressions to select tests that you want to include. If you use one or more regular expressions, specify each expression on a separate line. Procedure Enter the tempest run command with either the --whitelist-file or -w options to use an allowlist file: 5.5. Skipping tests by using blocklist files A blocklist file is a file that contains regular expressions to select tests that you want to exclude. If you use one or more regular expressions, specify each expression on a separate line. Procedure Enter the tempest run command with either the --blacklist-file or -b options to use a blocklist file: 5.6. Running tests in parallel or in series You can run tests in parallel or in series. You can also define the number of workers that you want to use when you run parallel tests. By default, the Integration Test Suite uses one worker for each CPU available. Choose to run the tests serially or in parallel: Run the tests serially: Run the tests in parallel (default): Use the --concurrency or -c option to specify the number of workers to use when you run tests in parallel: 5.7. Running specific tests Run specific tests with the --regex option. The regular expression must be a Python regular expression: Procedure Enter the following command: For example, use the following command to run all tests that have names that begin with tempest.scenario: 5.8. Deleting Integration Test Suite objects Enter the tempest cleanup command to delete all Integration Test Suite (tempest) resources. This command also deletes projects, but the command does not delete the administrator account: Procedure Delete the tempest resources:
[ "tempest run -l", "tempest run --smoke", "tempest run -w <whitelist_file>", "tempest run -b <blacklist_file>", "tempest run --serial", "tempest run --parallel", "tempest run --concurrency <workers>", "tempest run --regex <regex>", "tempest run --regex ^tempest.scenario", "tempest cleanup --delete-tempest-conf-objects" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/validating_your_cloud_with_the_red_hat_openstack_platform_integration_test_suite/assembly_validating-your-openstack-cloud-with-the-integration-test-suite-tempest_tempest
Chapter 1. Reasons to optimize your overcloud
Chapter 1. Reasons to optimize your overcloud If you are planning to scale to or deploy a large overcloud, optimize your overcloud to prevent any potential performance issues as its workload increases. By following these recommendations, you can prevent scale from affecting the performance of the Telemetry service and the Object Storage service within the overcloud.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deployment_recommendations_for_specific_red_hat_openstack_platform_services/reasons-to-optimize-your-overcloud
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub Use the following procedure to install the Red Hat Quay Operator from the OpenShift Container Platform OperatorHub. Procedure Using the OpenShift Container Platform console, select Operators OperatorHub . In the search box, type Red Hat Quay and select the official Red Hat Quay Operator provided by Red Hat. This directs you to the Installation page, which outlines the features, prerequisites, and deployment information. Select Install . This directs you to the Operator Installation page. The following choices are available for customizing the installation: Update Channel: Choose the update channel, for example, stable-3.12 for the latest release. Installation Mode: Choose All namespaces on the cluster if you want the Red Hat Quay Operator to be available cluster-wide. It is recommended that you install the Red Hat Quay Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. Approval Strategy: Choose to approve either automatic or manual updates. Automatic update strategy is recommended. Select Install .
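If you prefer to script the installation rather than use the console, the same Operator can typically be subscribed through an OLM Subscription manifest. This is a hedged sketch, not part of the documented procedure: the package, channel, and catalog source names are assumptions to verify against your cluster's OperatorHub catalog.
# Subscribe to the Red Hat Quay Operator cluster-wide via OLM (values are assumptions).
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators
spec:
  channel: stable-3.12
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF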
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-install
Chapter 51. Authentication and Interoperability
Chapter 51. Authentication and Interoperability Problem with importing a user certificate from CA over SSL The pki user-cert-add command provides an option to import a user certificate directly from CA. Due to incorrect client library initialization, when the command is executed over an SSL port, the command fails with the following error message: To work around this problem, download the certificate from CA into a file using the pki cert-show command. Then, upload the certificate from the file using the pki user-cert-add command. With the workaround, the user certificate is added correctly. (BZ#1246635) The IdM web UI displays all certificates on one page in the Certificates table The Certificates table, available under the Authentication tab in the Identity Management (IdM) web UI, ignores the page size limit of 20 entries. When more than 20 certificates are available, the table displays all the certificates on one page, instead of only displaying 20 certificates per page. (BZ# 1358836 ) Security warning when using ipa-kra-install , ipa-ca-install , or ipa-replica-install When using the ipa-kra-install , ipa-ca-install , and ipa-replica-install utilities to install an additional key recovery authority (KRA) component, certificate authority, or replica, the following warning appears: The error occurs due to RFC 2818, which deprecates the practice of carrying the subject host name in the subject distinguished name (DN) common name (CN) field. However, the three utilities succeed. Therefore, you can ignore the warning message. (BZ# 1358457 ) pam_pkcs11 only supports one token The PKCS#11 modules in the opensc and coolkey packages provide support for various types of smart cards. However the pam_pkcs11 module only supports one of them at a time. As a consequence, you cannot use PKCS#15 and CAC tokens using the same configuration. To work around the problem, install one of the following: the opensc package for PKCS#15 and PIV support the coolkey package for CAC, Coolkey, and PIV support (BZ# 1367919 ) Using ipa-ca-install on an IdM replica fails when the Directory Server is not configured with LDAPS Installing a certificate authority (CA) using the ipa-ca-install utility on an Identity Management (IdM) replica fails when the Directory Server on the replica is not configured with LDAPS (using the TLS protocol over port 636). The attempt fails with this error: Installing a replica in this situation is not possible. As a workaround, choose one of these options: Install the CA on the master server instead. Enable LDAPS on the replica manually before running ipa-ca-install . To manually enable LDAPS on the replica: 1. Export the server certificate from the /etc/httpd/alias file: Replace ca1/replica with the nickname of your certificate. 2. Remove the trust chain from certificate, because it was already imported: a. Extract the private key: b. Extract the public key: c. Create a PKCS#12 file without the CA certificate: Replace ca1/replica with the nickname of your certificate. 3. Import the created certificate into the Directory Server's NSSDB database: 4. Remove the temporary certificate files: 5. Create a file, /tmp/enable_ssl.ldif , with the following contents: 6. Modify the LDAP configuration to enable SSL: Replace dm_password with your Directory Manager password. 7. Create a file, /tmp/add_rsa.ldif , with the following contents: Replace ca1/replica with the nickname of your certificate. 8. Add the entry to the LDAP: Replace dm_password with your Directory Manager password. 9. 
Remove the temporary files: 10. Restart directory server: After following these steps, LDAPS is enabled, and you can successfuly run ipa-ca-install on the replica. (BZ# 1365858 ) Third-party certificate trust flags are reset after installing an external CA into IdM The ipa-ca-install --external-ca command, used to install an external certificate authority (CA) into an existing Identity Management (IdM) domain, generates a certificate signing request (CSR) that the user must submit to the external CA. When using a previously installed third-party certificate to sign the CSR, the third-party certificate trust flags in the NSS database are reset. Consequently, the certificate is no longer marked as trusted. In addition, checks performed by the mod_nss module fail, and the httpd service fails to start. The CA installation fails with the following message in this situation: As a workaround, after this message appears, reset the third-party certificate flags to their state and restart httpd . For example, if the ca1 certificate previously had the C,, trust flags: This restores the system to the correct state. (BZ# 1318616 ) realmd fails to remove the computer account from AD Red Hat Enterprise Linux uses Samba as default back end for Active Directory (AD) domain memberships. In this case, if you manually set a computer name using the --computer-name option with the realm join command, the account cannot be removed from AD when you leave the domain. To work around this problem, do not use the --computer-name option and instead add the computer name to the /etc/realmd.conf file. For example: With the workaround, the host is successfully joined to the domain and the account is automatically removed if you leave the domain using the realm leave --remove command. (BZ# 1370457 ) SSSD fails to manage autofs mappings from a LDAP tree Previously, the System Security Services Daemon (SSSD) implemented incorrect default values for autofs mappings when using the RFC2307 LDAP schema. A patch has been applied, which fixed the defaults to match the schema. However, users connecting to LDAP servers that contain mappings with the schema SSSD previously used, are not able to load the autofs attributes. Affected users see the following error reported in the /var/log/messages log file: To work around this problem, modify the /etc/sssd/sssd.conf file and set your domain to use the existing attribute mappings: As a result, SSSD is able to load autofs mappings from the attributes. (BZ# 1372814 ) The dependency list for pkispawn does not include openssl When the openssl package is not installed, using the pkispawn utility fails with this error: This problem occurs because the openssl package is not included as a runtime dependency of the pki-server package contained within the pki-core package. As a work around, install openssl before running pkispawn . 
(BZ#1376488) Enumerating a large number of users results in high CPU load and slows down other operations When enumerate=true is set in the etc/sssd/sssd.conf file and a large number of users (for example, 30,000 users) are present in the LDAP server, certain performance problems occur: the sssd_be process consumes almost 99% of CPU resources certain operations, such as logging in as a local user or logging out, take unexpectedly long to complete running the ldbsearch operation on the sysdb and timestamp caches fails with an error reporting that the indexed and full searches both failed Note that this is not a new known issue, as these problems occurred in releases of SSSD as well. (BZ# 888739 , BZ# 1379774 ) GDM fails to authenticate using a smart card When using smart card authentication, the System Security Services Daemon's (SSSD) PAM responder does not verify if the login name is a Kerberos user principal name (UPN). As a consequence, the gdm-password pluggable authentication module (PAM) shows the password prompt instead of the smart card PIN prompt when using a user principal name (UPN) as login name. As a result, smart card authenticating to the GNOME display manager (GDM) fails. (BZ# 1389796 ) The ipa passwd command fails when using uppercase or mixed case user names Identity Management (IdM) 4.4.0 introduced unified handling of user principals in all commands. However, some commands were not fully converted. As a consequence, the ipa passwd command fails when you use uppercase or mixed case letters in user names. To work around this issue, use only lower case letters in user names when using the ipa passwd command. (BZ# 1375133 ) The IdM web UI does not correctly recognize the status of a revoked certificate The Identity Management (IdM) web UI is currently unable to determine whether a certificate has been revoked. As a consequence: The Revoked sign is not displayed when viewing the certificate from the user, service, or host details page. The Revoke action is still available from the details page. Attempting to revoke an already revoked certificate results in an error dialog. The Remove Hold button is always disabled even if the certificate has been revoked because of Certificate Hold (revocation reason 6). (BZ# 1371479 ) SSSD only applies values in sudoUser attributes from AD in lower case Previously, when the System Security Services Daemon (SSSD) fetched sudo rules from Active Directory (AD), the sudoUser attribute must have match the exact case of the samAccountName attribute of the user the rule was assigned to. Due to a regression in Red Hat Enterprise Linux 7.3, the sudoUser attribute now only matches lower case values. To work around this problem, rename sudoUser attribute values to lower case. With the workaround, sudo rules are applied correctly. (BZ# 1380436 ) Updating the ipa-client and ipa-admintools packages can fail During the upgrade from Red Hat Enterprise Linux 7.2 to Red Hat Enterprise Linux 7.3, updating of the ipa-client and ipa-admintools packages can fail in some cases. To work around this problem, uninstall ipa-client and ipa-admintools prior to upgrading to Red Hat Enterprise Linux 7.3, and then install the new versions of these packages. (BZ#1390565) Upgrading SSSD sometimes causes the sssd process to be terminated When the sssd process performs an action for an unexpectedly long time, an internal watchdog process terminates it. However, the sssd process does not start again. 
This problem usually occurs during an attempt to upgrade SSSD on a slow system if the SSSD database contains a large number of entries. To work around this problem: 1. Make sure the central authentication server is available. This ensures that users will be able to authenticate after removing the SSSD cache in the step. 2. Remove the SSSD cache using the sss_cache utility before upgrading. A fix for this known issue will be available with the update. (BZ#1392441) Directory Server fails due to bind-dyndb-ldap schema errors The version of the bind-dyndb-ldap LDAP schema included in Identity Management contains syntax errors and is missing a description of one attribute. If the user uses this version of the schema, the Directory Server component fails to start. Consequently, error messages are logged in the journal, informing the user about the incorrect syntax. To work around this problem: Obtain a corrected schema file from the upstream git.fedorahosted.org repository: Copy the corrected schema file into the Directory Server's instance configuration folder. Restart Directory Server: (BZ#1413805)
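For the SSSD upgrade issue (BZ#1392441) described above, the cache can be flushed before updating. The following is a sketch of that workaround; verify that the central authentication server is reachable first, because users will have to re-authenticate against it.
# Invalidate all cached SSSD entries before upgrading the sssd packages.
sss_cache -E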
[ "javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated.", "SecurityWarning: Certificate has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818.", "[2/30]: configuring certificate server instance ipa.ipaserver.install.cainstance.CAInstance: CRITICAL Failed to configure CA instance: Command '/usr/sbin/pkispawn -s CA -f /tmp/tmpsDHYbO' returned non-zero exit status 1", "pk12util -d /etc/httpd/alias -k /etc/httpd/alias/pwdfile.txt -o temp.p12 -n 'ca1/replica'", "openssl pkcs12 -in temp.p12 -nocerts -nodes -out temp.key", "openssl pkcs12 -in temp.p12 -nokeys -clcerts -out temp.pem", "openssl pkcs12 -export -in temp.pem -inkey temp.key -out repl.p12 -name 'ca1/replica'", "pk12util -d /etc/dirsrv/slapd-EXAMPLE-COM -K '' -i repl.p12", "rm -f temp.p12 temp.key temp.pem repl.p12", "dn: cn=encryption,cn=config changetype: modify replace: nsSSL3 nsSSL3: off - replace: nsSSLClientAuth nsSSLClientAuth: allowed - replace: nsSSL3Ciphers nsSSL3Ciphers: default dn: cn=config changetype: modify replace: nsslapd-security nsslapd-security: on", "ldapmodify -H \"ldap://localhost\" -D \"cn=directory manager\" -f /tmp/enable_ssl.ldif -w dm_password", "dn: cn=RSA,cn=encryption,cn=config changetype: add objectclass: top objectclass: nsEncryptionModule cn: RSA nsSSLPersonalitySSL: ca1/replica nsSSLToken: internal (software) nsSSLActivation: on", "ldapadd -H \"ldap://localhost\" -D \"cn=directory manager\" -f /tmp/add_rsa.ldif -w dm_password", "rm -f /tmp/enable_ssl.ldif /tmp/add_rsa.ldif", "systemctl restart [email protected]", "CA failed to start after 300 seconds", "certutil -d /etc/httpd/alias -n 'ca1' -M -t C,, systemctl restart httpd.service", "[domain.example.com] computer-name = host_name", "Your configuration uses the autofs provider with schema set to rfc2307 and default attribute mappings. The default map has changed in this release, please make sure the configuration matches the server attributes.", "[domain/EXAMPLE] ldap_autofs_map_object_class = automountMap ldap_autofs_map_name = ou ldap_autofs_entry_object_class = automount ldap_autofs_entry_key = cn ldap_autofs_entry_value = automountInformation", "Installation failed: [Errno 2] No such file or directory", "wget https://git.fedorahosted.org/cgit/bind-dyndb-ldap.git/plain/doc/schema.ldif?id=17711141882aca3847a5daba2292bcbcc471ec63 -O /usr/share/doc/bind-dyndb-ldap-10.0/schema.ldif", "cp /usr/share/doc/bind-dyndb-ldap-10.0/schema.ldif /etc/dirsrv/slapd-[EXAMPLE-COM]/schema/[SCHEMA_FILE_NAME].ldif", "systemctl restart dirsrv.target" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/known_issues_authentication_and_interoperability
3.4. Search Result Type Options
3.4. Search Result Type Options The result type allows you to search for resources of any of the following types: Vms for a list of virtual machines Host for a list of hosts Pools for a list of pools Template for a list of templates Events for a list of events Users for a list of users Cluster for a list of clusters DataCenter for a list of data centers Storage for a list of storage domains As each type of resource has a unique set of properties and a set of other resource types that it is associated with, each search type has a set of valid syntax combinations. You can also use the auto-complete feature to create valid queries easily.
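For illustration only, a query is built from one of these result types followed by a colon and search criteria; the property names and values below are examples rather than an exhaustive reference.
Vms: status = up
Events: severity > normal
Host: cluster = Default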
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/search_result_type_options
Chapter 1. Getting started with Dev Spaces
Chapter 1. Getting started with Dev Spaces If your organization is already running a OpenShift Dev Spaces instance, you can get started as a new user by learning how to start a new workspace, manage your workspaces, and authenticate yourself to a Git server from a workspace: Section 1.1, "Starting a workspace from a Git repository URL" Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" Section 1.2, "Starting a workspace from a raw devfile URL" Section 1.3, "Basic actions you can perform on a workspace" Section 1.4, "Authenticating to a Git server from a workspace" Section 1.5, "Using the fuse-overlayfs storage driver for Podman and Buildah" 1.1. Starting a workspace from a Git repository URL With OpenShift Dev Spaces, you can use a URL in your browser to start a new workspace that contains a clone of a Git repository. This way, you can clone a Git repository that is hosted on GitHub, GitLab, Bitbucket or Microsoft Azure DevOps server instances. Tip You can also use the Git Repository URL field on the Create Workspace page of your OpenShift Dev Spaces dashboard to enter the URL of a Git repository to start a new workspace. Important If you use an SSH URL to start a new workspace, you must propagate the SSH key. See Configuring DevWorkspaces to use SSH keys for Git operations for more information. If the SSH URL points to a private repository, you must apply an access token to be able to fetch the devfile.yaml content. You can do this either by accepting an SCM authentication page or following a Personal Access Token procedure. Important Configure personal access token to access private repositories. See Section 6.1.2, "Using a Git-provider access token" . Prerequisites Your organization has a running instance of OpenShift Dev Spaces. You know the FQDN URL of your organization's OpenShift Dev Spaces instance: https:// <openshift_dev_spaces_fqdn> . Optional: You have authentication to the Git server configured. Your Git repository maintainer keeps the devfile.yaml or .devfile.yaml file in the root directory of the Git repository. (For alternative file names and file paths, see Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" .) Tip You can also start a new workspace by supplying the URL of a Git repository that contains no devfile. Doing so results in a workspace with Universal Developer Image and with Microsoft Visual Studio Code - Open Source as the workspace IDE. Procedure To start a new workspace with a clone of a Git repository: Optional: Visit your OpenShift Dev Spaces dashboard pages to authenticate to your organization's instance of OpenShift Dev Spaces. Visit the URL to start a new workspace using the basic syntax: Tip You can extend this URL with optional parameters: 1 See Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" . Tip You can use Git+SSH URLs to start a new workspace. See Configuring DevWorkspaces to use SSH keys for Git operations Example 1.1. A URL for starting a new workspace https:// <openshift_dev_spaces_fqdn> #https://github.com/che-samples/cpp-hello-world https:// <openshift_dev_spaces_fqdn> #[email protected]:che-samples/cpp-hello-world.git Example 1.2. The URL syntax for starting a new workspace with a clone of a GitHub instance repository https:// <openshift_dev_spaces_fqdn> #https:// <github_host> / <user_or_org> / <repository> starts a new workspace with a clone of the default branch. 
https:// <openshift_dev_spaces_fqdn> #https:// <github_host> / <user_or_org> / <repository> /tree/ <branch_name> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #https:// <github_host> / <user_or_org> / <repository> /pull/ <pull_request_id> starts a new workspace with a clone of the branch of the pull request. https:// <openshift_dev_spaces_fqdn> #git@ <github_host> : <user_or_org> / <repository> .git starts a new workspace from Git+SSH URL. Example 1.3. The URL syntax for starting a new workspace with a clone of a GitLab instance repository https:// <openshift_dev_spaces_fqdn> #https:// <gitlab_host> / <user_or_org> / <repository> starts a new workspace with a clone of the default branch. https:// <openshift_dev_spaces_fqdn> #https:// <gitlab_host> / <user_or_org> / <repository> /-/tree/ <branch_name> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #git@ <gitlab_host> : <user_or_org> / <repository> .git starts a new workspace from Git+SSH URL. Example 1.4. The URL syntax for starting a new workspace with a clone of a BitBucket Server repository https:// <openshift_dev_spaces_fqdn> #https:// <bb_host> /scm/ <project-key> / <repository> .git starts a new workspace with a clone of the default branch. https:// <openshift_dev_spaces_fqdn> #https:// <bb_host> /users/ <user_slug> /repos/ <repository> / starts a new workspace with a clone of the default branch, if a repository was created under the user profile. https:// <openshift_dev_spaces_fqdn> #https:// <bb_host> /users/ <user-slug> /repos/ <repository> /browse?at=refs%2Fheads%2F <branch-name> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #git@ <bb_host> : <user_slug> / <repository> .git starts a new workspace from Git+SSH URL. Example 1.5. The URL syntax for starting a new workspace with a clone of a Microsoft Azure DevOps Git repository https:// <openshift_dev_spaces_fqdn> #https:// <organization> @dev.azure.com/ <organization> / <project> /_git/ <repository> starts a new workspace with a clone of the default branch. https:// <openshift_dev_spaces_fqdn> #https:// <organization> @dev.azure.com/ <organization> / <project> /_git/ <repository> ?version=GB <branch> starts a new workspace with a clone of the specific branch. https:// <openshift_dev_spaces_fqdn> #[email protected]:v3/ <organization> / <project> / <repository> starts a new workspace from Git+SSH URL. After you enter the URL to start a new workspace in a browser tab, the workspace starting page appears. When the new workspace is ready, the workspace IDE loads in the browser tab. A clone of the Git repository is present in the filesystem of the new workspace. The workspace has a unique URL: https:// <openshift_dev_spaces_fqdn> / <user_name> / <unique_url> . Additional resources Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" Section 1.3, "Basic actions you can perform on a workspace" Section 6.1.2, "Using a Git-provider access token" Section 6.2.1, "Mounting Git configuration" Configuring DevWorkspaces to use SSH keys for Git operations 1.1.1. Optional parameters for the URLs for starting a new workspace When you start a new workspace, OpenShift Dev Spaces configures the workspace according to the instructions in the devfile. When you use a URL to start a new workspace, you can append optional parameters to the URL that further configure the workspace. 
You can use these parameters to specify a workspace IDE, start duplicate workspaces, and specify a devfile file name or path. Section 1.1.1.1, "URL parameter concatenation" Section 1.1.1.2, "URL parameter for the IDE" Section 1.1.1.3, "URL parameter for the IDE image" Section 1.1.1.4, "URL parameter for starting duplicate workspaces" Section 1.1.1.5, "URL parameter for the devfile file name" Section 1.1.1.6, "URL parameter for the devfile file path" Section 1.1.1.7, "URL parameter for the workspace storage" Section 1.1.1.8, "URL parameter for additional remotes" Section 1.1.1.9, "URL parameter for a container image" 1.1.1.1. URL parameter concatenation The URL for starting a new workspace supports concatenation of multiple optional URL parameters by using & with the following URL syntax: https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ? <url_parameter_1> & <url_parameter_2> & <url_parameter_3> Example 1.6. A URL for starting a new workspace with the URL of a Git repository and optional URL parameters The complete URL for the browser: https:// <openshift_dev_spaces_fqdn> #https://github.com/che-samples/cpp-hello-world?new&che-editor=che-incubator/intellij-community/latest&devfilePath=tests/testdevfile.yaml Explanation of the parts of the URL: 1 OpenShift Dev Spaces URL. 2 The URL of the Git repository to be cloned into the new workspace. 3 The concatenated optional URL parameters. 1.1.1.2. URL parameter for the IDE You can use the che-editor= URL parameter to specify a supported IDE when starting a workspace. Tip Use the che-editor= parameter when you cannot add or edit a /.che/che-editor.yaml file in the source-code Git repository to be cloned for workspaces. Note The che-editor= parameter overrides the /.che/che-editor.yaml file. This parameter accepts two types of values: che-editor= <editor_key> Table 1.1. The URL parameter <editor_key> values for supported IDEs IDE <editor_key> value Note Microsoft Visual Studio Code - Open Source che-incubator/che-code/latest This is the default IDE that loads in a new workspace when the URL parameter or che-editor.yaml is not used. JetBrains IntelliJ IDEA Community Edition che-incubator/che-idea/latest Technology Preview . Use the Dashboard to select this IDE. che-editor= <url_to_a_file> 1 URL to a file with devfile content . Tip The URL must point to the raw file content. To use this parameter with a che-editor.yaml file, copy the file with another name or path, and remove the line with inline from the file. The che-editors.yaml file features the devfiles of all supported IDEs. 1.1.1.3. URL parameter for the IDE image You can use the editor-image parameter to set the custom IDE image for the workspace. Important If the Git repository contains /.che/che-editor.yaml file, the custom editor will be overridden with the new IDE image. If there is no /.che/che-editor.yaml file in the Git repository, the default editor will be overridden with the new IDE image. If you want to override the supported IDE and change the target editor image, you can use both parameters together: che-editor and editor-image URL parameters. The URL parameter to override the IDE image is editor-image= : Example: https:// <openshift_dev_spaces_fqdn> #https://github.com/eclipse-che/che-docs?editor-image=quay.io/che-incubator/che-code: or https:// <openshift_dev_spaces_fqdn> #https://github.com/eclipse-che/che-docs?che-editor=che-incubator/che-code/latest&editor-image=quay.io/che-incubator/che-code: 1.1.1.4. 
URL parameter for starting duplicate workspaces Visiting a URL for starting a new workspace results in a new workspace according to the devfile and with a clone of the linked Git repository. In some situations, you might need to have multiple workspaces that are duplicates in terms of the devfile and the linked Git repository. You can do this by visiting the same URL for starting a new workspace with a URL parameter. The URL parameter for starting a duplicate workspace is new : Note If you currently have a workspace that you started using a URL, then visiting the URL again without the new URL parameter results in an error message. 1.1.1.5. URL parameter for the devfile file name When you visit a URL for starting a new workspace, OpenShift Dev Spaces searches the linked Git repository for a devfile with the file name .devfile.yaml or devfile.yaml . The devfile in the linked Git repository must follow this file-naming convention. In some situations, you might need to specify a different, unconventional file name for the devfile. The URL parameter for specifying an unconventional file name of the devfile is df= <filename> .yaml : 1 <filename> .yaml is an unconventional file name of the devfile in the linked Git repository. Tip The df= <filename> .yaml parameter also has a long version: devfilePath= <filename> .yaml . 1.1.1.6. URL parameter for the devfile file path When you visit a URL for starting a new workspace, OpenShift Dev Spaces searches the root directory of the linked Git repository for a devfile with the file name .devfile.yaml or devfile.yaml . The file path of the devfile in the linked Git repository must follow this path convention. In some situations, you might need to specify a different, unconventional file path for the devfile in the linked Git repository. The URL parameter for specifying an unconventional file path of the devfile is devfilePath= <relative_file_path> : 1 <relative_file_path> is an unconventional file path of the devfile in the linked Git repository. 1.1.1.7. URL parameter for the workspace storage If the URL for starting a new workspace does not contain a URL parameter specifying the storage type, the new workspace is created in ephemeral or persistent storage, whichever is defined as the default storage type in the CheCluster Custom Resource. The URL parameter for specifying a storage type for a workspace is storageType= <storage_type> : 1 Possible <storage_type> values: ephemeral per-user (persistent) per-workspace (persistent) Tip With the ephemeral or per-workspace storage type, you can run multiple workspaces concurrently, which is not possible with the default per-user storage type. Additional resources Chapter 7, Requesting persistent storage for workspaces 1.1.1.8. URL parameter for additional remotes When you visit a URL for starting a new workspace, OpenShift Dev Spaces configures the origin remote to be the Git repository that you specified with # after the FQDN URL of your organization's OpenShift Dev Spaces instance. The URL parameter for cloning and configuring additional remotes for the workspace is remotes= : Important If you do not enter the name origin for any of the additional remotes, the remote from <git_repository_url> will be cloned and named origin by default, and its expected branch will be checked out automatically. If you enter the name origin for one of the additional remotes, its default branch will be checked out automatically, but the remote from <git_repository_url> will NOT be cloned for the workspace. 1.1.1.9. 
URL parameter for a container image You can use the image parameter to use a custom reference to a container image in the following scenarios: The Git repository contains no devfile, and you want to start a new workspace with the custom image. The Git repository contains a devfile, and you want to override the first container image listed in the components section of the devfile. The URL parameter for the path to the container image is image= : Example https:// <openshift_dev_spaces_fqdn> #https://github.com/eclipse-che/che-docs?image=quay.io/devfile/universal-developer-image:ubi8-latest 1.2. Starting a workspace from a raw devfile URL With OpenShift Dev Spaces, you can open a devfile URL in your browser to start a new workspace. Tip You can use the Git Repo URL field on the Create Workspace page of your OpenShift Dev Spaces dashboard to enter the URL of a devfile to start a new workspace. Important To initiate a clone of the Git repository in the filesystem of a new workspace, the devfile must contain project info. See https://devfile.io/docs/2.2.0/adding-projects . Prerequisites Your organization has a running instance of OpenShift Dev Spaces. You know the FQDN URL of your organization's OpenShift Dev Spaces instance: https:// <openshift_dev_spaces_fqdn> . Procedure To start a new workspace from a devfile URL: Optional: Visit your OpenShift Dev Spaces dashboard pages to authenticate to your organization's instance of OpenShift Dev Spaces. Visit the URL to start a new workspace from a public repository using the basic syntax: You can pass your personal access token to the URL to access a devfile from private repositories: 1 Your personal access token that you generated on the Git provider's website. This works for GitHub, GitLab, Bitbucket, Microsoft Azure, and other providers that support Personal Access Token. Important Automated Git credential injection does not work in this case. To configure the Git credentials, use the configure personal access token guide. Tip You can extend this URL with optional parameters: 1 See Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" . Example 1.7. A URL for starting a new workspace from a public repository https:// <openshift_dev_spaces_fqdn> #https://raw.githubusercontent.com/che-samples/cpp-hello-world/main/devfile.yaml Example 1.8. A URL for starting a new workspace from a private repository https:// <openshift_dev_spaces_fqdn> #https:// <token> @raw.githubusercontent.com/che-samples/cpp-hello-world/main/devfile.yaml Verification After you enter the URL to start a new workspace in a browser tab, the workspace starting page appears. When the new workspace is ready, the workspace IDE loads in the browser tab. The workspace has a unique URL: https:// <openshift_dev_spaces_fqdn> / <user_name> / <unique_url> . Additional resources Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" Section 1.3, "Basic actions you can perform on a workspace" Section 6.1.2, "Using a Git-provider access token" Section 6.2.1, "Mounting Git configuration" Configuring DevWorkspaces to use SSH keys for Git operations 1.3. Basic actions you can perform on a workspace You manage your workspaces and verify their current states in the Workspaces page ( https:// <openshift_dev_spaces_fqdn> /dashboard/#/workspaces ) of your OpenShift Dev Spaces dashboard. After you start a new workspace, you can perform the following actions on it in the Workspaces page: Table 1.2. 
Basic actions you can perform on a workspace Action GUI steps in the Workspaces page Reopen a running workspace Click Open . Restart a running workspace Go to ... > Restart Workspace . Stop a running workspace Go to ... > Stop Workspace . Start a stopped workspace Click Open . Delete a workspace Go to ... > Delete Workspace . 1.4. Authenticating to a Git server from a workspace In a workspace, you can run Git commands that require user authentication like cloning a remote private Git repository or pushing to a remote public or private Git repository. User authentication to a Git server from a workspace is configured by the administrator or, in some cases, by the individual user: Your administrator sets up an OAuth application on GitHub, GitLab, Bitbucket, or Microsoft Azure Repos for your organization's Red Hat OpenShift Dev Spaces instance. As a workaround, some users create and apply their own Kubernetes Secrets for their personal Git-provider access tokens or configure SSH keys for Git operations . Additional resources Administration Guide: Configuring OAuth for Git providers User Guide: Using a Git-provider access token Configuring DevWorkspaces to use SSH keys for Git operations 1.5. Using the fuse-overlayfs storage driver for Podman and Buildah By default, newly created workspaces that do not specify a devfile will use the Universal Developer Image (UDI). The UDI contains common development tools and dependencies commonly used by developers. Podman and Buildah are included in the UDI, allowing developers to build and push container images from their workspace. By default, Podman and Buildah in the UDI are configured to use the vfs storage driver. For more efficient image management, use the fuse-overlayfs storage driver which supports copy-on-write in rootless environments. You must meet the following requirements to use fuse-overlayfs in a workspace: For OpenShift versions older than 4.15, the administrator has enabled /dev/fuse access on the cluster by following https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/administration_guide/index#administration-guide:configuring-fuse . The workspace has the necessary annotations for using the /dev/fuse device. See Section 1.5.1, "Accessing /dev/fuse" . The storage.conf file in the workspace container has been configured to use fuse-overlayfs. See Section 1.5.2, "Enabling fuse-overlayfs with a ConfigMap" . Additional resources Universal Developer Image 1.5.1. Accessing /dev/fuse You must have access to /dev/fuse to use fuse-overlayfs. This section describes how to make /dev/fuse accessible to workspace containers. Prerequisites For OpenShift versions older than 4.15, the administrator has enabled access to /dev/fuse by following https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/administration_guide/index#administration-guide:configuring-fuse . Determine a workspace to use fuse-overlayfs with. Procedure Use the pod-overrides attribute to add the required annotations defined in https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/administration_guide/index#administration-guide:configuring-fuse to the workspace. The pod-overrides attribute allows merging certain fields in the workspace pod's spec . For OpenShift versions older than 4.15: For OpenShift version 4.15 and later: Verification steps Start the workspace and verify that /dev/fuse is available in the workspace container. 
After completing this procedure, follow the steps in Section 1.5.2, "Enabling fuse-overlayfs with a ConfigMap" to use fuse-overlayfs for Podman. 1.5.2. Enabling fuse-overlayfs with a ConfigMap You can define the storage driver for Podman and Buildah in the ~/.config/containers/storage.conf file. Here are the default contents of the /home/user/.config/containers/storage.conf file in the UDI container: storage.conf To use fuse-overlayfs, storage.conf can be set to the following: storage.conf 1 The absolute path to the fuse-overlayfs binary. The /usr/bin/fuse-overlayfs path is the default for the UDI. You can do this manually after starting a workspace. Another option is to build a new image based on the UDI with changes to storage.conf and use the new image for workspaces. Otherwise, you can update the /home/user/.config/containers/storage.conf for all workspaces in your project by creating a ConfigMap that mounts the updated file. See Section 6.2, "Mounting ConfigMaps" . Prerequisites For OpenShift versions older than 4.15, the administrator has enabled access to /dev/fuse by following https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/administration_guide/index#administration-guide:configuring-fuse . A workspace with the required annotations are set by following Section 1.5.1, "Accessing /dev/fuse" Note Since ConfigMaps mounted by following this guide mounts the ConfigMap's data to all workspaces, following this procedure will set the storage driver to fuse-overlayfs for all workspaces. Ensure that your workspaces contain the required annotations to use fuse-overlayfs by following Section 1.5.1, "Accessing /dev/fuse" . Procedure Apply a ConfigMap that mounts a /home/user/.config/containers/storage.conf file in your project. kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.config/containers data: storage.conf: | [storage] driver = "overlay" [storage.options.overlay] mount_program="/usr/bin/fuse-overlayfs" Warning Creating this ConfigMap will cause all of your running workspaces to restart. Verification steps Start the workspace containing the required annotations and verify that the storage driver is overlay . Example output: Note The following error might occur for existing workspaces: In this case, delete the libpod local files as mentioned in the error message. 1.6. Running containers with kubedock Kubedock is a minimal container engine implementation that gives you a Podman-/docker-like experience inside a OpenShift Dev Spaces workspace. Kubedock is especially useful when dealing with ad-hoc, ephemeral, and testing containers, such as in the use cases listed below: Executing application tests which rely on Testcontainers framework. Using Quarkus Dev Services . Running a container stored in remote container registry, for local development purposes Important The image you want to use with kubedock must be compliant with Openshift Container Platform guidelines . Otherwise, running the image with kubedock will result in a failure even if the same image runs locally without issues. 
Enabling kubedock After enabling the kubedock environment variable, kubedock will run the following podman commands: podman run podman ps podman exec podman cp podman logs podman inspect podman kill podman rm podman wait podman stop podman start Other commands such as podman build are started by the local Podman. Important Using podman commands with kubedock has following limitations The podman build -t <image> . && podman run <image> command will fail. Use podman build -t <image> . && podman push <image> && podman run <image> instead. The podman generate kube command is not supported. --env option causes the podman run command to fail. Prerequisites An image compliant with Openshift Container Platform guidelines . Process Add KUBEDOCK_ENABLED=true environment variable to the devfile. (OPTIONAL) Use the KUBEDOCK_PARAM variable to specify additional kubedock parameters. The list of variables is available here . Alternatively, you can use the following command to view the available options: Example schemaVersion: 2.2.0 metadata: name: kubedock-sample-devfile components: - name: tools container: image: quay.io/devfile/universal-developer-image:latest memoryLimit: 8Gi memoryRequest: 1Gi cpuLimit: "2" cpuRequest: 200m env: - name: KUBEDOCK_PARAMS value: "--reverse-proxy --kubeconfig /home/user/.kube/config --initimage quay.io/agiertli/kubedock:0.13.0" - name: USE_JAVA17 value: "true" - value: /home/jboss/.m2 name: MAVEN_CONFIG - value: -Xmx4G -Xss128M -XX:MetaspaceSize=1G -XX:MaxMetaspaceSize=2G name: MAVEN_OPTS - name: KUBEDOCK_ENABLED value: 'true' - name: DOCKER_HOST value: 'tcp://127.0.0.1:2475' - name: TESTCONTAINERS_RYUK_DISABLED value: 'true' - name: TESTCONTAINERS_CHECKS_DISABLE value: 'true' endpoints: - exposure: none name: kubedock protocol: tcp targetPort: 2475 - exposure: public name: http-booster protocol: http targetPort: 8080 attributes: discoverable: true urlRewriteSupported: true - exposure: internal name: debug protocol: http targetPort: 5005 volumeMounts: - name: m2 path: /home/user/.m2 - name: m2 volume: size: 10G Important You must configure the Podman or docker API to point to kubedock setting CONTAINER_HOST=tcp://127.0.0.1:2475 or DOCKER_HOST=tcp://127.0.0.1:2475 when running containers. At the same time, you must configure Podman to point to local Podman when building the container.
[ "https:// <openshift_dev_spaces_fqdn> # <git_repository_url>", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ? <optional_parameters> 1", "https:// <openshift_dev_spaces_fqdn> 1 #https://github.com/che-samples/cpp-hello-world 2 ?new&che-editor=che-incubator/intellij-community/latest&devfilePath=tests/testdevfile.yaml 3", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?che-editor= <editor_key>", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?che-editor= <url_to_a_file> 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?editor-image= <container_registry/image_name:image_tag>", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?new", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?df= <filename> .yaml 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?devfilePath= <relative_file_path> 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?storageType= <storage_type> 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?remotes={{ <name_1> , <url_1> },{ <name_2> , <url_2> },{ <name_3> , <url_3> },...}", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?image= <container_image_url>", "https:// <openshift_dev_spaces_fqdn> # <devfile_url>", "https:// <openshift_dev_spaces_fqdn> # https:// <token> @ <host> / <path_to_devfile> 1", "https:// <openshift_dev_spaces_fqdn> # <devfile_url> ? <optional_parameters> 1", "oc patch devworkspace <DevWorkspace_name> --patch '{\"spec\":{\"template\":{\"attributes\":{\"pod-overrides\":{\"metadata\":{\"annotations\":{\"io.kubernetes.cri-o.Devices\":\"/dev/fuse\",\"io.openshift.podman-fuse\":\"\"}}}}}}}' --type=merge", "oc patch devworkspace <DevWorkspace_name> --patch '{\"spec\":{\"template\":{\"attributes\":{\"pod-overrides\":{\"metadata\":{\"annotations\":{\"io.kubernetes.cri-o.Devices\":\"/dev/fuse\"}}}}}}}' --type=merge", "stat /dev/fuse", "[storage] driver = \"vfs\"", "[storage] driver = \"overlay\" [storage.options.overlay] mount_program=\"/usr/bin/fuse-overlayfs\" 1", "kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.config/containers data: storage.conf: | [storage] driver = \"overlay\" [storage.options.overlay] mount_program=\"/usr/bin/fuse-overlayfs\"", "podman info | grep overlay", "graphDriverName: overlay overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64 fuse-overlayfs: version 1.12 Backing Filesystem: overlayfs", "ERRO[0000] User-selected graph driver \"overlay\" overwritten by graph driver \"vfs\" from database - delete libpod local files (\"/home/user/.local/share/containers/storage\") to resolve. 
May prevent use of images created by other tools", "kubedock server --help", "schemaVersion: 2.2.0 metadata: name: kubedock-sample-devfile components: - name: tools container: image: quay.io/devfile/universal-developer-image:latest memoryLimit: 8Gi memoryRequest: 1Gi cpuLimit: \"2\" cpuRequest: 200m env: - name: KUBEDOCK_PARAMS value: \"--reverse-proxy --kubeconfig /home/user/.kube/config --initimage quay.io/agiertli/kubedock:0.13.0\" - name: USE_JAVA17 value: \"true\" - value: /home/jboss/.m2 name: MAVEN_CONFIG - value: -Xmx4G -Xss128M -XX:MetaspaceSize=1G -XX:MaxMetaspaceSize=2G name: MAVEN_OPTS - name: KUBEDOCK_ENABLED value: 'true' - name: DOCKER_HOST value: 'tcp://127.0.0.1:2475' - name: TESTCONTAINERS_RYUK_DISABLED value: 'true' - name: TESTCONTAINERS_CHECKS_DISABLE value: 'true' endpoints: - exposure: none name: kubedock protocol: tcp targetPort: 2475 - exposure: public name: http-booster protocol: http targetPort: 8080 attributes: discoverable: true urlRewriteSupported: true - exposure: internal name: debug protocol: http targetPort: 5005 volumeMounts: - name: m2 path: /home/user/.m2 - name: m2 volume: size: 10G" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/user_guide/getting-started-with-devspaces
4.5.4. Configuring Redundant Ring Protocol
4.5.4. Configuring Redundant Ring Protocol As of Red Hat Enterprise Linux 6.4, the Red Hat High Availability Add-On supports the configuration of redundant ring protocol. When using redundant ring protocol, there are a variety of considerations you must take into account, as described in Section 8.6, "Configuring Redundant Ring Protocol" . Clicking on the Redundant Ring tab displays the Redundant Ring Protocol Configuration page. This page displays all of the nodes that are currently configured for the cluster. If you are configuring a system to use redundant ring protocol, you must specify the Alternate Name for each node for the second ring. The Redundant Ring Protocol Configuration page optionally allows you to specify the Alternate Ring Multicast Address , the Alternate Ring CMAN Port , and the Alternate Ring Multicast Packet TTL for the second ring. If you specify a multicast address for the second ring, either the alternate multicast address or the alternate port must be different from the multicast address for the first ring. If you specify an alternate port, the port numbers of the first ring and the second ring must differ by at least two, since the system itself uses port and port-1 to perform operations. If you do not specify an alternate multicast address, the system will automatically use a different multicast address for the second ring.
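For orientation only, a cluster.conf fragment along these lines is what such a configuration typically produces; the node names, alternate names, multicast addresses, port, and TTL below are illustrative placeholders, and Section 8.6, "Configuring Redundant Ring Protocol" remains the authoritative reference for the exact element and attribute names:
<clusternodes>
  <clusternode name="node1.example.com" nodeid="1">
    <altname name="node1-alt.example.com"/>
  </clusternode>
  <clusternode name="node2.example.com" nodeid="2">
    <altname name="node2-alt.example.com"/>
  </clusternode>
</clusternodes>
<cman>
  <multicast addr="239.192.99.73"/>
  <altmulticast addr="239.192.99.88" port="888" ttl="3"/>
</cman>
Note that the alternate multicast address in this sketch differs from the first ring's address, in line with the rule described above.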
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-rrp-conga-ca
Chapter 11. Management of Ceph OSDs on the dashboard
Chapter 11. Management of Ceph OSDs on the dashboard As a storage administrator, you can monitor and manage OSDs on the Red Hat Ceph Storage Dashboard. Some of the capabilities of the Red Hat Ceph Storage Dashboard are: List OSDs, their status, statistics, information such as attributes, metadata, device health, performance counters and performance details. Mark OSDs down, in, out, lost, purge, reweight, scrub, deep-scrub, destroy, delete, and select profiles to adjust backfilling activity. List all drives associated with an OSD. Set and change the device class of an OSD. Deploy OSDs on new drives and hosts. 11.1. Prerequisites A running Red Hat Ceph Storage cluster cluster-manager level of access on the Red Hat Ceph Storage dashboard 11.2. Managing the OSDs on the Ceph dashboard You can carry out the following actions on a Ceph OSD on the Red Hat Ceph Storage Dashboard: Create a new OSD. Edit the device class of the OSD. Mark the Flags as No Up , No Down , No In , or No Out . Scrub and deep-scrub the OSDs. Reweight the OSDs. Mark the OSDs Out , In , Down , or Lost . Purge the OSDs. Destroy the OSDs. Delete the OSDs. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts, Monitors and Manager Daemons are added to the storage cluster. Procedure Log in to the Dashboard. From the Cluster drop-down menu, select OSDs . Creating an OSD To create the OSD, click Create . Figure 11.1. Add device for OSDs Note Ensure you have an available host and a few available devices. You can check for available devices in Physical Disks under the Cluster drop-down menu. In the Create OSDs window, from Deployment Options, select one of the below options: Cost/Capacity-optimized : The cluster gets deployed with all available HDDs. Throughput-optimized : Slower devices are used to store data and faster devices are used to store journals/WALs. IOPS-optimized : All the available NVMEs are used to deploy OSDs. From the Advanced Mode, you can add primary, WAL and DB devices by clicking +Add . Primary devices : Primary storage devices contain all OSD data. WAL devices : Write-Ahead-Log devices are used for BlueStore's internal journal and are used only if the WAL device is faster than the primary device. For example, NVMEs or SSDs. DB devices : DB devices are used to store BlueStore's internal metadata and are used only if the DB device is faster than the primary device. For example, NVMEs or SSDs. If you want to encrypt your data for security purposes, under Features , select encryption . Click the Preview button and in the OSD Creation Preview dialog box, click Create . You get a notification that the OSD was created successfully. The OSD status changes from in and down to in and up . Editing an OSD To edit an OSD, select the row. From Edit drop-down menu, select Edit . Edit the device class. Click Edit OSD . Figure 11.2. Edit an OSD You get a notification that the OSD was updated successfully. Marking the Flags of OSDs To mark the flag of the OSD, select the row. From Edit drop-down menu, select Flags . Mark the Flags with No Up , No Down , No In , or No Out . Click Update . Figure 11.3. Marking Flags of an OSD You get a notification that the flags of the OSD were updated successfully. Scrubbing the OSDs To scrub the OSD, select the row. From Edit drop-down menu, select Scrub . In the OSDs Scrub dialog box, click Update . Figure 11.4. Scrubbing an OSD You get a notification that the scrubbing of the OSD was initiated successfully.
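If you also want to confirm the scrub from a terminal, a brief CLI sketch follows; the OSD ID 0 is an example, and the commands are assumed to be run with admin access to the cluster (for example, inside a cephadm shell):
ceph osd scrub 0     # queue a scrub of osd.0, matching the dashboard Scrub action
ceph osd tree        # confirm the OSD's placement and its up/in state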
Deep-scrubbing the OSDs To deep-scrub the OSD, select the row. From Edit drop-down menu, select Deep scrub . In the OSDs Deep Scrub dialog box, click Update . Figure 11.5. Deep-scrubbing an OSD You get a notification that the deep scrubbing of the OSD was initiated successfully. Reweighting the OSDs To reweight the OSD, select the row. From Edit drop-down menu, select Reweight . In the Reweight OSD dialog box, enter a value between zero and one. Click Reweight . Figure 11.6. Reweighting an OSD Marking OSDs Out To mark the OSD out, select the row. From Edit drop-down menu, select Mark Out . In the Mark OSD out dialog box, click Mark Out . Figure 11.7. Marking OSDs out The status of the OSD will change to out . Marking OSDs In To mark the OSD in, select the OSD row that is in out status. From Edit drop-down menu, select Mark In . In the Mark OSD in dialog box, click Mark In . Figure 11.8. Marking OSDs in The status of the OSD will change to in . Marking OSDs Down To mark the OSD down, select the row. From Edit drop-down menu, select Mark Down . In the Mark OSD down dialog box, click Mark Down . Figure 11.9. Marking OSDs down The status of the OSD will change to down . Marking OSDs Lost To mark the OSD lost, select the OSD in out and down status. From Edit drop-down menu, select Mark Lost . In the Mark OSD Lost dialog box, check the Yes, I am sure option, and click Mark Lost . Figure 11.10. Marking OSDs Lost Purging OSDs To purge the OSD, select the OSD in down status. From Edit drop-down menu, select Purge . In the Purge OSDs dialog box, check the Yes, I am sure option, and click Purge OSD . Figure 11.11. Purging OSDs All the flags are reset and the OSD is back in in and up status. Destroying OSDs To destroy the OSD, select the OSD in down status. From Edit drop-down menu, select Destroy . In the Destroy OSDs dialog box, check the Yes, I am sure option, and click Destroy OSD . Figure 11.12. Destroying OSDs The status of the OSD changes to destroyed . Deleting OSDs To delete the OSD, select the OSD in down status. From Edit drop-down menu, select Delete . In the Destroy OSDs dialog box, check the Yes, I am sure option, and click Delete OSD . Note You can preserve the OSD_ID when you have to replace the failed OSD. Figure 11.13. Deleting OSDs 11.3. Replacing the failed OSDs on the Ceph dashboard You can replace the failed OSDs in a Red Hat Ceph Storage cluster with the cluster-manager level of access on the dashboard. One of the highlights of this feature on the dashboard is that the OSD IDs can be preserved while replacing the failed OSDs. Prerequisites A running Red Hat Ceph Storage cluster. At least cluster-manager level of access to the Ceph Dashboard. At least one of the OSDs is down. Procedure On the dashboard, you can identify the failed OSDs in the following ways: Dashboard AlertManager pop-up notifications. Dashboard landing page showing HEALTH_WARN status. Dashboard landing page showing failed OSDs. Dashboard OSD page showing failed OSDs. In this example, you can see that one of the OSDs is down on the landing page of the dashboard. Apart from this, on the physical drive, you can view the LED lights blinking if one of the OSDs is down. Click OSDs . Select the out and down OSD: From the Edit drop-down menu, select Flags and select No Up and click Update . From the Edit drop-down menu, select Delete . In the Delete OSD dialog box, select the Preserve OSD ID(s) for replacement and Yes, I am sure check boxes. Click Delete OSD . Wait until the status of the OSD changes to out and destroyed status.
Optional: If you want to change the No Up Flag for the entire cluster, in the Cluster-wide configuration drop-down menu, select Flags . In Cluster-wide OSDs Flags dialog box, select No Up and click Update. Optional: If the OSDs are down due to a hard disk failure, replace the physical drive: If the drive is hot-swappable, replace the failed drive with a new one. If the drive is not hot-swappable and the host contains multiple OSDs, you might have to shut down the whole host and replace the physical drive. Consider preventing the cluster from backfilling. See the Stopping and Starting Rebalancing chapter in the Red Hat Ceph Storage Troubleshooting Guide for details. When the drive appears under the /dev/ directory, make a note of the drive path. If you want to add the OSD manually, find the OSD drive and format the disk. If the new disk has data, zap the disk: Syntax Example From the Create drop-down menu, select Create . In the Create OSDs window, click +Add for Primary devices. In the Primary devices dialog box, from the Hostname drop-down list, select any one filter. From Any drop-down list, select the respective option. Note You have to select the Hostname first and then at least one filter to add the devices. For example, from Hostname list, select Type and from Any list select hdd . Select Vendor and from Any list, select ATA Click Add . In the Create OSDs window , click the Preview button. In the OSD Creation Preview dialog box, Click Create . You will get a notification that the OSD is created. The OSD will be in out and down status. Select the newly created OSD that has out and down status. In the Edit drop-down menu, select Mark-in . In the Mark OSD in window, select Mark in . In the Edit drop-down menu, select Flags . Uncheck No Up and click Update . Optional: If you have changed the No Up Flag before for cluster-wide configuration, in the Cluster-wide configuration menu, select Flags . In Cluster-wide OSDs Flags dialog box, uncheck No Up and click Update . Verification Verify that the OSD that was destroyed is created on the device and the OSD ID is preserved. Additional Resources For more information on Down OSDs, see the Down OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . For additional assistance see the Red Hat Support for service section in the Red Hat Ceph Storage Troubleshooting Guide . For more information on system roles, see the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide .
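As a hedged command-line complement to the dashboard verification above, assuming the replaced OSD kept the example ID 0 and the commands are run with admin access to the cluster:
ceph osd tree | grep osd.0     # the preserved OSD ID should reappear on the replacement device, up and in
ceph -s                        # overall health should return to HEALTH_OK once recovery and backfill complete
ceph osd deep-scrub 0          # optionally queue a deep scrub of the rebuilt OSD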
[ "ceph orch device zap HOST_NAME PATH --force", "ceph orch device zap ceph-adm2 /dev/sdc --force" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/management-of-ceph-osds-on-the-dashboard
Chapter 6. Release Information
Chapter 6. Release Information These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat Virtualization. Notes for updates released during the support lifecycle of this Red Hat Virtualization release will appear in the advisory text associated with each update or in the Red Hat Virtualization Technical Notes . This document is available on the Red Hat documentation page . 6.1. Red Hat Virtualization 4.4 SP 1 Batch Update 3 (ovirt-4.5.3) 6.1.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1705338 Previously, stale data sometimes appeared in the DB "unregistered_ovf_of_entities" DB table. As a result, when importing a floating Storage Domain with a VM and disks from a source RHV to destination RHV. After importing the floating Storage Domain back into the source RHV, the VM is listed under the "VM Import" tab, but can't be imported because all its disks are now located on another Storage Domain (the destination RHV). In addition, after the first OVF update, the OVF of the VM reappears on the floating Storage Domain as a "ghost" OVF. In this release, after the floating Storage Domain is re-attached in the source RHV, the VM does not appear under the "VM Import" tab and no "ghost" OVF is re-created after the OVF update, and the DB table is filled correctly during Storage Domain attachment. This ensures that the "unregistered_ovf_of_entities" DB table contains the most up-to-date data, and no irrelevant entries are present. BZ# 1968433 Previously, attempts to start highly available virtual machines during failover or failback flows sometimes failed with an error "Cannot run VM. VM X is being imported", resulting in the virtual machines staying down. In this release, virtual machines are no longer started by the disaster-recovery scripts while being imported. BZ# 1974535 Previously, highly available VMs with a VM lease running on a primary site may not have started on a secondary site during active-passive failover because none of the hosts were set as ready to support VM leases. In this release, when a highly available VM with a VM lease fails to start because hosts were filtered out due to not being ready to support VM leases, it keeps trying to start periodically. If it takes time for the engine to discover that the storage domain that contains the VM lease is ready, the attempts to start the VM will continue until the status of the storage domain changes. BZ# 1983567 There may be stale data in some DB tables, resulting in missing disks after importing a VM (after Storage Domain was imported from a source RHV to destination RHV, and the VM was imported too). Bug fixes BZ#1910858 and BZ#1705338 solved similar issues, and since this bug is hard to reproduce, it may have been fixed by these 2 fixes. In this release, everything works, the VM is imported with all the attached disks. BZ# 2094576 Previously, small qcow2 volumes in block storage were allocated 2.5 GiB (chunk size), without considering the requested capacity. As a result, there was wasted space with volumes allocated beyond their capacity. In this release, volumes with capacity smaller than one chunk use their capacity for the initial size (rounded to the extent). For example, for capacities smaller than one extent (128 Mib), this results in 128 MiB allocated as their initial size. 
BZ# 2123141 In this release, image transfers cannot move from the final state (finished successfully or finished with failure) back to the non-final state which could lead to hanging image transfers that block moving hosts to maintenance. BZ# 2125290 Previously, an LVM device file was not created if no LVM devices were found during VDSM configuration. As a result, all LVM commands worked on VGs belonging to RHV storage domains. In this release, the vdsm-tool creates a devices file even when no LVM devices are found, and Storage Domain VGs are not seen by LVM commands. BZ# 2125658 Previously, static IPv6 interface configuration in the ifcfg file during Self-Hosted Engine setup did not include the IPV6_AUTOCONF=no setting. As a result, in NetworkManager the configuration of the property ipv6.method remained 'auto' instead of 'manual' on the interface and the interface connection was intermittent causing a loss of connectivity with the Manager. In this release, during Self-Hosted Engine deployment, the interfaces are also configured with IPV6_AUTOCONF=no, and the connection is truly static and unaffected by dynamic changes in the network. BZ# 2137532 Previously, the Memory Overcommitment Manager (MoM) sometimes experienced an error on startup, resulting in the MoM not working and reporting error messages with tracebacks in the logs. In this release, the MoM works properly. 6.1.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1886211 In this release, during a restore operation, the snapshot is locked. In addition, a notification is now displayed following a successful snapshot restore. 6.1.3. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 2130700 Incremental backup or Changed Block Tracking (CBT) is now generally available. BZ# 2132386 RHV 4.4 SP1 is only supported on RHEL 8.6 EUS. When performing RHV Manager or hypervisor installation, the RHEL version must be updated to RHEL 8.6 and the subscription channels must be updated to RHEL 8.6 EUS (when they are available). 6.1.4. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1952078 When migrating virtual machines from hosts that have not been upgraded to hosts that have been upgraded, and migration encryption is enabled, the migration might fail due to a missing migration client certificate. Workaround: Place the migration origin host (that has not been upgraded) in Maintenance mode before proceeding with migration.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/chap-release_notes
Chapter 22. Configuring Batch Applications
Chapter 22. Configuring Batch Applications JBoss EAP 7 supports Jakarta Batch . You can configure an environment for running batch applications and manage batch jobs using the batch-jberet subsystem. For information on developing batch applications, see Jakarta Batch Application Development in the JBoss EAP Development Guide . 22.1. Configuring Batch Jobs You can configure settings for batch jobs using the batch-jberet subsystem, which is based on the JBeret implementation. The default batch-jberet subsystem configuration defines an in-memory job repository and default thread pool settings. <subsystem xmlns="urn:jboss:domain:batch-jberet:2.0"> <default-job-repository name="in-memory"/> <default-thread-pool name="batch"/> <job-repository name="in-memory"> <in-memory/> </job-repository> <thread-pool name="batch"> <max-threads count="10"/> <keepalive-time time="30" unit="seconds"/> </thread-pool> </subsystem> By default, any batch jobs stopped during a server suspend will be restarted upon server resume. You can set the restart-jobs-on-resume property to false to leave jobs in the STOPPED state instead. You can also configure the settings for batch job repositories and thread pools . 22.1.1. Configure Batch Job Repositories This section shows you how to configure in-memory and JDBC job repositories for storing batch job information using the management CLI. You can also configure job repositories using the management console by navigating to Configuration Subsystems Batch (JBeret) , clicking View , and selecting either In Memory or JDBC from the left-hand menu. Add an In-memory Job Repository You can add a job repository that stores batch job information in memory. Add a JDBC Job Repository You can add a job repository that stores batch job information in a database. You must specify the name of the datasource for connecting to the database. Set a Default Job Repository You can set an in-memory or JDBC job repository as the default job repository for batch applications. This will require a server reload. 22.1.2. Configure Batch Thread Pools This section shows you how to configure thread pools and thread factories to be used for batch jobs using the management CLI. You can also configure thread pools and thread factories using the management console by navigating to Configuration Subsystems Batch (JBeret) , clicking View , and selecting either Thread Factory or Thread Pool from the left-hand menu. Configure a Thread Pool When adding a thread pool, you must specify the max-threads , which should always be greater than 3 as two threads are reserved to ensure partition jobs can execute as expected. Add a thread pool. If desired, set a keepalive-time value. Use a Thread Factory Add a thread factory. Configure the desired attributes for the thread factory. group-name - The name of a thread group to create for this thread factory. priority - The thread priority of created threads. thread-name-pattern - The template used to create names for threads. The following patterns may be used: %% - A percent sign %t - The per-factory thread sequence number %g - The global thread sequence number %f - The factory sequence number %i - The thread ID Assign the thread factory to a thread pool. This will require a server reload. Set a Default Thread Pool You can set a different thread pool as the default thread pool. This will require a server reload. View Thread Pool Statistics You can view runtime information about a batch thread pool using the read-resource management CLI operation. 
You must use the include-runtime=true parameter in order to see this runtime information. You can also view runtime information for batch thread pools using the management console by navigating to the Batch subsystem from the Runtime tab. 22.2. Managing Batch Jobs The batch-jberet subsystem resource for deployments allows you to start, stop, restart, and view execution details for batch jobs. Batch jobs can be managed from the management CLI or the management console . Manage Batch Jobs from the Management CLI Restart a Batch Job You can restart a job that is in a STOPPED or FAILED state by providing its execution ID and optionally any properties to use when restarting the batch job. The execution ID must be the most recent execution of the job instance. Start a Batch Job You can start a batch job by providing the job XML file and optionally any properties to use when starting the batch job. Stop a Batch Job You can stop a running batch job by providing its execution ID. View Batch Job Execution Details You can view the details of batch job executions. You must use the include-runtime=true parameter on the read-resource operation in order to see this runtime information. Manage Batch Jobs from the Management Console To manage batch jobs from the management console, navigate to the Runtime tab, select the server, select Batch (JBeret) , and choose the job from the list. Restart a Batch Job Restart a STOPPED job by selecting the execution and clicking Restart . Start a Batch Job Start a new execution of a batch job by selecting the job and choosing Start from the drop down. Stop a Batch Job Stop a running batch job by selecting the execution and clicking Stop . View Batch Job Execution Details Job execution details are shown for each execution listed in the table. 22.3. Configure Security for Batch Jobs You can configure the batch-jberet subsystem to run batch jobs with an Elytron security domain. This allows batch jobs to be securely suspended and resumed by the same secured identity. For example, a secured RESTful endpoint is created to initiate batch jobs using the batch-jberet subsystem. If both the RESTful endpoint and batch-jberet subsystem were secured using the same security domain, or the batch-jberet security domain trusted the RESTful endpoint's security domain, batch jobs initiated in this manner could be securely paused and resumed by the same secured identity. Use the following management CLI command to update the security-domain attribute to configure security for batch jobs. Note Batch jobs require the org.wildfly.extension.batch.jberet.deployment.BatchPermission permission. It provides start , stop , restart , abandon , and read permissions that align with javax.batch.operations.JobOperator . The default-permission-mapper mapper provides the org.wildfly.extension.batch.jberet.deployment.BatchPermission permission.
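Putting the preceding pieces together, a short management CLI sketch might look like the following; the repository name my-jdbc-repo, datasource ExampleDS, deployment batch-app.war, job file import-job.xml, property input.file, and execution ID 1 are all hypothetical values rather than defaults of the batch-jberet subsystem:
/subsystem=batch-jberet/jdbc-job-repository=my-jdbc-repo:add(data-source=ExampleDS)
/subsystem=batch-jberet:write-attribute(name=default-job-repository,value=my-jdbc-repo)
reload
/deployment=batch-app.war/subsystem=batch-jberet:start-job(job-xml-name=import-job.xml,properties={input.file=/tmp/data.csv})
/deployment=batch-app.war/subsystem=batch-jberet:read-resource(recursive=true,include-runtime=true)
/deployment=batch-app.war/subsystem=batch-jberet:stop-job(execution-id=1)
The read-resource output lists the execution IDs that the stop-job and restart-job operations expect.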
[ "<subsystem xmlns=\"urn:jboss:domain:batch-jberet:2.0\"> <default-job-repository name=\"in-memory\"/> <default-thread-pool name=\"batch\"/> <job-repository name=\"in-memory\"> <in-memory/> </job-repository> <thread-pool name=\"batch\"> <max-threads count=\"10\"/> <keepalive-time time=\"30\" unit=\"seconds\"/> </thread-pool> </subsystem>", "/subsystem=batch-jberet:write-attribute(name=restart-jobs-on-resume,value=false)", "/subsystem=batch-jberet/in-memory-job-repository= REPOSITORY_NAME :add", "/subsystem=batch-jberet/jdbc-job-repository= REPOSITORY_NAME :add(data-source= DATASOURCE )", "/subsystem=batch-jberet:write-attribute(name=default-job-repository,value= REPOSITORY_NAME )", "reload", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :add(max-threads=10)", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :write-attribute(name=keepalive-time,value={time=60,unit=SECONDS})", "/subsystem=batch-jberet/thread-factory= THREAD_FACTORY_NAME :add", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :write-attribute(name=thread-factory,value= THREAD_FACTORY_NAME )", "reload", "/subsystem=batch-jberet:write-attribute(name=default-thread-pool,value= THREAD_POOL_NAME )", "reload", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"active-count\" => 0, \"completed-task-count\" => 0L, \"current-thread-count\" => 0, \"keepalive-time\" => undefined, \"largest-thread-count\" => 0, \"max-threads\" => 15, \"name\" => \" THREAD_POOL_NAME \", \"queue-size\" => 0, \"rejected-count\" => 0, \"task-count\" => 0L, \"thread-factory\" => \" THREAD_FACTORY_NAME \" } }", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:restart-job(execution-id= EXECUTION_ID ,properties={ PROPERTY = VALUE })", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:start-job(job-xml-name= JOB_XML_NAME ,properties={ PROPERTY = VALUE })", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:stop-job(execution-id= EXECUTION_ID )", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:read-resource(recursive=true,include-runtime=true) { \"outcome\" => \"success\", \"result\" => {\"job\" => {\"import-file\" => { \"instance-count\" => 2, \"running-executions\" => 0, \"execution\" => { \"2\" => { \"batch-status\" => \"COMPLETED\", \"create-time\" => \"2016-04-11T22:03:12.708-0400\", \"end-time\" => \"2016-04-11T22:03:12.718-0400\", \"exit-status\" => \"COMPLETED\", \"instance-id\" => 58L, \"last-updated-time\" => \"2016-04-11T22:03:12.719-0400\", \"start-time\" => \"2016-04-11T22:03:12.708-0400\" }, \"1\" => { \"batch-status\" => \"FAILED\", \"create-time\" => \"2016-04-11T21:57:17.567-0400\", \"end-time\" => \"2016-04-11T21:57:17.596-0400\", \"exit-status\" => \"Error : org.hibernate.exception.ConstraintViolationException: could not execute statement\", \"instance-id\" => 15L, \"last-updated-time\" => \"2016-04-11T21:57:17.597-0400\", \"start-time\" => \"2016-04-11T21:57:17.567-0400\" } } }}} }", "/subsystem=batch-jberet:write-attribute(name=security-domain, value=ExampleDomain) reload" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_batch_applications
Chapter 3. AlertmanagerConfig [monitoring.coreos.com/v1beta1]
Chapter 3. AlertmanagerConfig [monitoring.coreos.com/v1beta1] Description AlertmanagerConfig configures the Prometheus Alertmanager, specifying how alerts should be grouped, inhibited and notified to external systems. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AlertmanagerConfigSpec is a specification of the desired behavior of the Alertmanager configuration. By definition, the Alertmanager configuration only applies to alerts for which the namespace label is equal to the namespace of the AlertmanagerConfig resource. 3.1.1. .spec Description AlertmanagerConfigSpec is a specification of the desired behavior of the Alertmanager configuration. By definition, the Alertmanager configuration only applies to alerts for which the namespace label is equal to the namespace of the AlertmanagerConfig resource. Type object Property Type Description inhibitRules array List of inhibition rules. The rules will only apply to alerts matching the resource's namespace. inhibitRules[] object InhibitRule defines an inhibition rule that allows to mute alerts when other alerts are already firing. See https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule receivers array List of receivers. receivers[] object Receiver defines one or more notification integrations. route object The Alertmanager route definition for alerts matching the resource's namespace. If present, it will be added to the generated Alertmanager configuration as a first-level route. timeIntervals array List of TimeInterval specifying when the routes should be muted or active. timeIntervals[] object TimeInterval specifies the periods in time when notifications will be muted or active. 3.1.2. .spec.inhibitRules Description List of inhibition rules. The rules will only apply to alerts matching the resource's namespace. Type array 3.1.3. .spec.inhibitRules[] Description InhibitRule defines an inhibition rule that allows to mute alerts when other alerts are already firing. See https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule Type object Property Type Description equal array (string) Labels that must have an equal value in the source and target alert for the inhibition to take effect. sourceMatch array Matchers for which one or more alerts have to exist for the inhibition to take effect. The operator enforces that the alert matches the resource's namespace. sourceMatch[] object Matcher defines how to match on alert's labels. targetMatch array Matchers that have to be fulfilled in the alerts to be muted. The operator enforces that the alert matches the resource's namespace. targetMatch[] object Matcher defines how to match on alert's labels. 3.1.4. 
.spec.inhibitRules[].sourceMatch Description Matchers for which one or more alerts have to exist for the inhibition to take effect. The operator enforces that the alert matches the resource's namespace. Type array 3.1.5. .spec.inhibitRules[].sourceMatch[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.6. .spec.inhibitRules[].targetMatch Description Matchers that have to be fulfilled in the alerts to be muted. The operator enforces that the alert matches the resource's namespace. Type array 3.1.7. .spec.inhibitRules[].targetMatch[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.8. .spec.receivers Description List of receivers. Type array 3.1.9. .spec.receivers[] Description Receiver defines one or more notification integrations. Type object Required name Property Type Description discordConfigs array List of Slack configurations. discordConfigs[] object DiscordConfig configures notifications via Discord. See https://prometheus.io/docs/alerting/latest/configuration/#discord_config emailConfigs array List of Email configurations. emailConfigs[] object EmailConfig configures notifications via Email. msteamsConfigs array List of MSTeams configurations. It requires Alertmanager >= 0.26.0. msteamsConfigs[] object MSTeamsConfig configures notifications via Microsoft Teams. It requires Alertmanager >= 0.26.0. name string Name of the receiver. Must be unique across all items from the list. opsgenieConfigs array List of OpsGenie configurations. opsgenieConfigs[] object OpsGenieConfig configures notifications via OpsGenie. See https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config pagerdutyConfigs array List of PagerDuty configurations. pagerdutyConfigs[] object PagerDutyConfig configures notifications via PagerDuty. See https://prometheus.io/docs/alerting/latest/configuration/#pagerduty_config pushoverConfigs array List of Pushover configurations. pushoverConfigs[] object PushoverConfig configures notifications via Pushover. See https://prometheus.io/docs/alerting/latest/configuration/#pushover_config slackConfigs array List of Slack configurations. slackConfigs[] object SlackConfig configures notifications via Slack. See https://prometheus.io/docs/alerting/latest/configuration/#slack_config snsConfigs array List of SNS configurations snsConfigs[] object SNSConfig configures notifications via AWS SNS. See https://prometheus.io/docs/alerting/latest/configuration/#sns_configs telegramConfigs array List of Telegram configurations. telegramConfigs[] object TelegramConfig configures notifications via Telegram. See https://prometheus.io/docs/alerting/latest/configuration/#telegram_config victoropsConfigs array List of VictorOps configurations. victoropsConfigs[] object VictorOpsConfig configures notifications via VictorOps. See https://prometheus.io/docs/alerting/latest/configuration/#victorops_config webexConfigs array List of Webex configurations. 
3.1.10. .spec.receivers[].discordConfigs Description List of Discord configurations. Type array 3.1.11. .spec.receivers[].discordConfigs[] Description DiscordConfig configures notifications via Discord. See https://prometheus.io/docs/alerting/latest/configuration/#discord_config Type object Property Type Description apiURL object The secret's key that contains the Discord webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. httpConfig object HTTP client configuration. message string The template of the message's body. sendResolved boolean Whether or not to notify about resolved alerts. title string The template of the message's title. 3.1.12. .spec.receivers[].discordConfigs[].apiURL Description The secret's key that contains the Discord webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.13. .spec.receivers[].discordConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.14. .spec.receivers[].discordConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.15.
.spec.receivers[].discordConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.16. .spec.receivers[].discordConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.17. .spec.receivers[].discordConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.18. .spec.receivers[].discordConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.19. .spec.receivers[].discordConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.20. .spec.receivers[].discordConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.21. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. 
secret object Secret containing data to use for the targets. 3.1.22. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.23. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.24. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.25. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.26. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.27. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.28. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.29. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. 
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.30. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.31. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.32. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.33. .spec.receivers[].emailConfigs Description List of Email configurations. Type array 3.1.34. .spec.receivers[].emailConfigs[] Description EmailConfig configures notifications via Email. Type object Property Type Description authIdentity string The identity to use for authentication. authPassword object The secret's key that contains the password to use for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. authSecret object The secret's key that contains the CRAM-MD5 secret. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. authUsername string The username to use for authentication. from string The sender address. headers array Further email header key/value pairs. Overrides any headers previously set by the notification implementation. headers[] object KeyValue defines a (key, value) tuple. hello string The hostname to identify to the SMTP server. html string The HTML body of the email notification. requireTLS boolean The SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. sendResolved boolean Whether or not to notify about resolved alerts. smarthost string The SMTP host and port through which emails are sent. E.g. example.com:25. text string The text body of the email notification. tlsConfig object TLS configuration. to string The email address to send notifications to.
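As a hedged illustration of how the EmailConfig fields above map onto a receivers entry, the fragment below shows one possible configuration; the addresses, SMTP host and the smtp-auth Secret name and key are placeholders.

```yaml
# Fragment of an AlertmanagerConfig spec; addresses and Secret names are placeholders.
receivers:
  - name: team-email
    emailConfigs:
      - to: 'oncall@example.com'           # recipient address
        from: 'alertmanager@example.com'   # sender address
        smarthost: 'smtp.example.com:587'  # SMTP host and port
        hello: 'alertmanager.example.com'  # hostname to identify to the SMTP server
        authUsername: 'alertmanager'
        authPassword:
          name: smtp-auth                  # Secret in the same namespace as the AlertmanagerConfig
          key: password                    # key inside the Secret
        requireTLS: true
        sendResolved: true
        headers:
          - key: Subject
            value: 'Cluster alert notification'
```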
3.1.35. .spec.receivers[].emailConfigs[].authPassword Description The secret's key that contains the password to use for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.36. .spec.receivers[].emailConfigs[].authSecret Description The secret's key that contains the CRAM-MD5 secret. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.37. .spec.receivers[].emailConfigs[].headers Description Further email header key/value pairs. Overrides any headers previously set by the notification implementation. Type array 3.1.38. .spec.receivers[].emailConfigs[].headers[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.39. .spec.receivers[].emailConfigs[].tlsConfig Description TLS configuration Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.40. .spec.receivers[].emailConfigs[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.41. .spec.receivers[].emailConfigs[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.42. .spec.receivers[].emailConfigs[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.43. .spec.receivers[].emailConfigs[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.44. .spec.receivers[].emailConfigs[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.45.
.spec.receivers[].emailConfigs[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.46. .spec.receivers[].emailConfigs[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.47. .spec.receivers[].msteamsConfigs Description List of MSTeams configurations. It requires Alertmanager >= 0.26.0. Type array 3.1.48. .spec.receivers[].msteamsConfigs[] Description MSTeamsConfig configures notifications via Microsoft Teams. It requires Alertmanager >= 0.26.0. Type object Required webhookUrl Property Type Description httpConfig object HTTP client configuration. sendResolved boolean Whether to notify about resolved alerts. summary string Message summary template. It requires Alertmanager >= 0.27.0. text string Message body template. title string Message title template. webhookUrl object MSTeams webhook URL. 3.1.49. .spec.receivers[].msteamsConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.50. .spec.receivers[].msteamsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.51. .spec.receivers[].msteamsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.52. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.53. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.54. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.55. .spec.receivers[].msteamsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.56. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.57. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.58. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.59. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.60. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.61. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.62. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.63. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.64. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.65. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.66. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. 
Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.67. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.68. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.69. .spec.receivers[].msteamsConfigs[].webhookUrl Description MSTeams webhook URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.70. .spec.receivers[].opsgenieConfigs Description List of OpsGenie configurations. Type array 3.1.71. .spec.receivers[].opsgenieConfigs[] Description OpsGenieConfig configures notifications via OpsGenie. See https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config Type object Property Type Description actions string Comma-separated list of actions that will be available for the alert. apiKey object The secret's key that contains the OpsGenie API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiURL string The URL to send OpsGenie API requests to. description string Description of the incident. details array A set of arbitrary key/value pairs that provide further detail about the incident. details[] object KeyValue defines a (key, value) tuple. entity string Optional field that can be used to specify which domain the alert is related to. httpConfig object HTTP client configuration. message string Alert text limited to 130 characters. note string Additional alert note. priority string Priority level of alert. Possible values are P1, P2, P3, P4, and P5. responders array List of responders responsible for notifications. responders[] object OpsGenieConfigResponder defines a responder to an incident. One of id, name or username has to be defined. sendResolved boolean Whether or not to notify about resolved alerts. source string Backlink to the sender of the notification. tags string Comma-separated list of tags attached to the notifications.
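To ground the OpsGenieConfig table above, the following is a minimal sketch of a receivers entry; the opsgenie-api Secret, responder name, tag values and the message template are placeholders and only illustrative.

```yaml
# Fragment of an AlertmanagerConfig spec; Secret and responder names are placeholders.
receivers:
  - name: team-opsgenie
    opsgenieConfigs:
      - apiKey:
          name: opsgenie-api   # Secret in the same namespace as the AlertmanagerConfig
          key: api-key         # key inside the Secret holding the OpsGenie API key
        message: '{{ .CommonAnnotations.summary }}'  # illustrative template; limited to 130 characters
        priority: P2           # one of P1, P2, P3, P4, P5
        tags: 'prometheus,openshift'
        responders:
          - name: example-team # one of id, name or username must be set
            type: team
        sendResolved: true
```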
3.1.72. .spec.receivers[].opsgenieConfigs[].apiKey Description The secret's key that contains the OpsGenie API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.73. .spec.receivers[].opsgenieConfigs[].details Description A set of arbitrary key/value pairs that provide further detail about the incident. Type array 3.1.74. .spec.receivers[].opsgenieConfigs[].details[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.75. .spec.receivers[].opsgenieConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.76. .spec.receivers[].opsgenieConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.77. .spec.receivers[].opsgenieConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.80. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.81. .spec.receivers[].opsgenieConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.82. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.83. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.84. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.85. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.86. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.87. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.88. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.89. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.90. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.91. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.92. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.93. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.94. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.95. .spec.receivers[].opsgenieConfigs[].responders Description List of responders responsible for notifications. Type array 3.1.96. .spec.receivers[].opsgenieConfigs[].responders[] Description OpsGenieConfigResponder defines a responder to an incident. One of id , name or username has to be defined. Type object Required type Property Type Description id string ID of the responder. name string Name of the responder. type string Type of responder. username string Username of the responder. 3.1.97. .spec.receivers[].pagerdutyConfigs Description List of PagerDuty configurations. Type array 3.1.98. .spec.receivers[].pagerdutyConfigs[] Description PagerDutyConfig configures notifications via PagerDuty. See https://prometheus.io/docs/alerting/latest/configuration/#pagerduty_config Type object Property Type Description class string The class/type of the event. client string Client identification. clientURL string Backlink to the sender of notification. component string The part or component of the affected system that is broken. description string Description of the incident. details array Arbitrary key/value pairs that provide further detail about the incident. details[] object KeyValue defines a (key, value) tuple. group string A cluster or grouping of sources. httpConfig object HTTP client configuration. pagerDutyImageConfigs array A list of image details to attach that provide further detail about an incident. pagerDutyImageConfigs[] object PagerDutyImageConfig attaches images to an incident pagerDutyLinkConfigs array A list of link details to attach that provide further detail about an incident. pagerDutyLinkConfigs[] object PagerDutyLinkConfig attaches text links to an incident routingKey object The secret's key that contains the PagerDuty integration key (when using Events API v2). Either this field or serviceKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. sendResolved boolean Whether or not to notify about resolved alerts. serviceKey object The secret's key that contains the PagerDuty service key (when using integration type "Prometheus"). Either this field or routingKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. severity string Severity of the incident. url string The URL to send requests to. 3.1.99. .spec.receivers[].pagerdutyConfigs[].details Description Arbitrary key/value pairs that provide further detail about the incident. Type array 3.1.100. .spec.receivers[].pagerdutyConfigs[].details[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.101. .spec.receivers[].pagerdutyConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. 
This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.102. .spec.receivers[].pagerdutyConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.103. .spec.receivers[].pagerdutyConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.104. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.105. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.106. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.107. .spec.receivers[].pagerdutyConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. 
Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.108. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.109. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.110. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.111. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.112. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.113. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.114. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. 
secret object Secret containing data to use for the targets. 3.1.115. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.116. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.117. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.118. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.119. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.120. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.121. .spec.receivers[].pagerdutyConfigs[].pagerDutyImageConfigs Description A list of image details to attach that provide further detail about an incident. Type array 3.1.122. .spec.receivers[].pagerdutyConfigs[].pagerDutyImageConfigs[] Description PagerDutyImageConfig attaches images to an incident Type object Property Type Description alt string Alt is the optional alternative text for the image. href string Optional URL; makes the image a clickable link. src string Src of the image being attached to the incident 3.1.123. 
.spec.receivers[].pagerdutyConfigs[].pagerDutyLinkConfigs Description A list of link details to attach that provide further detail about an incident. Type array 3.1.124. .spec.receivers[].pagerdutyConfigs[].pagerDutyLinkConfigs[] Description PagerDutyLinkConfig attaches text links to an incident Type object Property Type Description alt string Text that describes the purpose of the link, and can be used as the link's text. href string Href is the URL of the link to be attached 3.1.125. .spec.receivers[].pagerdutyConfigs[].routingKey Description The secret's key that contains the PagerDuty integration key (when using Events API v2). Either this field or serviceKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.126. .spec.receivers[].pagerdutyConfigs[].serviceKey Description The secret's key that contains the PagerDuty service key (when using integration type "Prometheus"). Either this field or routingKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.127. .spec.receivers[].pushoverConfigs Description List of Pushover configurations. Type array 3.1.128. .spec.receivers[].pushoverConfigs[] Description PushoverConfig configures notifications via Pushover. See https://prometheus.io/docs/alerting/latest/configuration/#pushover_config Type object Property Type Description device string The name of a device to send the notification to expire string How long your notification will continue to be retried for, unless the user acknowledges the notification. html boolean Whether notification message is HTML or plain text. httpConfig object HTTP client configuration. message string Notification message. priority string Priority, see https://pushover.net/api#priority retry string How often the Pushover servers will send the same notification to the user. Must be at least 30 seconds. sendResolved boolean Whether or not to notify about resolved alerts. sound string The name of one of the sounds supported by device clients to override the user's default sound choice title string Notification title. token object The secret's key that contains the registered application's API token, see https://pushover.net/apps . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either token or tokenFile is required. tokenFile string The token file that contains the registered application's API token, see https://pushover.net/apps . Either token or tokenFile is required. It requires Alertmanager >= v0.26.0. url string A supplementary URL shown alongside the message. urlTitle string A title for supplementary URL, otherwise just the URL is shown userKey object The secret's key that contains the recipient user's user key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either userKey or userKeyFile is required. 
userKeyFile string The user key file that contains the recipient user's user key. Either userKey or userKeyFile is required. It requires Alertmanager >= v0.26.0. 3.1.129. .spec.receivers[].pushoverConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.130. .spec.receivers[].pushoverConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.131. .spec.receivers[].pushoverConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.132. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.133. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.134. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.135. .spec.receivers[].pushoverConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.136. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.137. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.138. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.139. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.140. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.141. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. 
cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.142. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.143. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.144. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.145. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.146. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.147. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.148. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.149. 
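To make the pushoverConfigs schema above concrete, here is a minimal, illustrative fragment. The Secret name, keys, and template strings are placeholders; the token and userKey selectors are described in the subsections that follow.

receivers:
- name: pushover-example               # placeholder receiver name
  pushoverConfigs:
  - token:                             # registered application's API token
      name: pushover-secret            # placeholder Secret in the same namespace
      key: app-token
    userKey:                           # recipient user's user key
      name: pushover-secret
      key: user-key
    title: '{{ .CommonLabels.alertname }}'   # placeholder notification title template
    priority: "1"                            # Pushover priority, passed as a string
    sendResolved: true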
.spec.receivers[].pushoverConfigs[].token Description The secret's key that contains the registered application's API token, see https://pushover.net/apps . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either token or tokenFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.150. .spec.receivers[].pushoverConfigs[].userKey Description The secret's key that contains the recipient user's user key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either userKey or userKeyFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.151. .spec.receivers[].slackConfigs Description List of Slack configurations. Type array 3.1.152. .spec.receivers[].slackConfigs[] Description SlackConfig configures notifications via Slack. See https://prometheus.io/docs/alerting/latest/configuration/#slack_config Type object Property Type Description actions array A list of Slack actions that are sent with each notification. actions[] object SlackAction configures a single Slack action that is sent with each notification. See https://api.slack.com/docs/message-attachments#action_fields and https://api.slack.com/docs/message-buttons for more information. apiURL object The secret's key that contains the Slack webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. callbackId string channel string The channel or user to send notifications to. color string fallback string fields array A list of Slack fields that are sent with each notification. fields[] object SlackField configures a single Slack field that is sent with each notification. Each field must contain a title, value, and optionally, a boolean value to indicate if the field is short enough to be displayed to other fields designated as short. See https://api.slack.com/docs/message-attachments#fields for more information. footer string httpConfig object HTTP client configuration. iconEmoji string iconURL string imageURL string linkNames boolean mrkdwnIn array (string) pretext string sendResolved boolean Whether or not to notify about resolved alerts. shortFields boolean text string thumbURL string title string titleLink string username string 3.1.153. .spec.receivers[].slackConfigs[].actions Description A list of Slack actions that are sent with each notification. Type array 3.1.154. .spec.receivers[].slackConfigs[].actions[] Description SlackAction configures a single Slack action that is sent with each notification. See https://api.slack.com/docs/message-attachments#action_fields and https://api.slack.com/docs/message-buttons for more information. Type object Required text type Property Type Description confirm object SlackConfirmationField protect users from destructive actions or particularly distinguished decisions by asking them to confirm their button click one more time. See https://api.slack.com/docs/interactive-message-field-guide#confirmation_fields for more information. name string style string text string type string url string value string 3.1.155. 
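A minimal slackConfigs fragment matching the SlackConfig schema above might look as follows. The receiver name, Secret name and key, channel, and templates are placeholders, and the apiURL Secret selector is described a little further below.

receivers:
- name: slack-example                  # placeholder receiver name
  slackConfigs:
  - apiURL:                            # Slack webhook URL stored in a Secret
      name: slack-secret               # placeholder Secret in the same namespace
      key: webhook-url
    channel: '#alerts'                 # placeholder channel
    title: '{{ .CommonLabels.alertname }}'
    text: '{{ .CommonAnnotations.summary }}'
    sendResolved: true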
.spec.receivers[].slackConfigs[].actions[].confirm Description SlackConfirmationField protect users from destructive actions or particularly distinguished decisions by asking them to confirm their button click one more time. See https://api.slack.com/docs/interactive-message-field-guide#confirmation_fields for more information. Type object Required text Property Type Description dismissText string okText string text string title string 3.1.156. .spec.receivers[].slackConfigs[].apiURL Description The secret's key that contains the Slack webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.157. .spec.receivers[].slackConfigs[].fields Description A list of Slack fields that are sent with each notification. Type array 3.1.158. .spec.receivers[].slackConfigs[].fields[] Description SlackField configures a single Slack field that is sent with each notification. Each field must contain a title, value, and optionally, a boolean value to indicate if the field is short enough to be displayed to other fields designated as short. See https://api.slack.com/docs/message-attachments#fields for more information. Type object Required title value Property Type Description short boolean title string value string 3.1.159. .spec.receivers[].slackConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.160. .spec.receivers[].slackConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.161. .spec.receivers[].slackConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.162. 
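Building on the previous fragment, a Slack configuration can also carry fields and actions. The sketch below uses the SlackField, SlackAction, and SlackConfirmationField shapes shown above, with placeholder values and URLs.

slackConfigs:
- apiURL:
    name: slack-secret                 # placeholder Secret
    key: webhook-url
  channel: '#alerts'
  fields:
  - title: Severity
    value: '{{ .CommonLabels.severity }}'
    short: true
  actions:
  - type: button
    text: Silence
    url: https://console.example.com/silence   # placeholder link target
    confirm:
      text: Silence this alert?
      okText: Silence
      dismissText: Cancel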
.spec.receivers[].slackConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.163. .spec.receivers[].slackConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.164. .spec.receivers[].slackConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.165. .spec.receivers[].slackConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.166. .spec.receivers[].slackConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.167. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.168. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.169. 
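The httpConfig block has the same shape for every receiver type. The following sketch, using hypothetical Secret and ConfigMap names and a hypothetical token endpoint, shows OAuth2 client credentials combined with a custom TLS configuration:

httpConfig:
  oauth2:
    clientId:
      secret:
        name: oauth-credentials        # placeholder Secret holding the client ID
        key: client-id
    clientSecret:
      name: oauth-credentials          # placeholder Secret holding the client secret
      key: client-secret
    tokenUrl: https://idp.example.com/oauth2/token   # placeholder token endpoint
    scopes:
    - alerting                         # placeholder scope
  tlsConfig:
    ca:
      configMap:
        name: ca-bundle                # placeholder ConfigMap with the CA certificate
        key: ca.crt
    cert:
      secret:
        name: client-tls               # placeholder Secret with the client certificate
        key: tls.crt
    keySecret:
      name: client-tls
      key: tls.key
    serverName: alerts.example.com     # hostname to verify on the server certificate
  followRedirects: true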
.spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.170. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.171. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.172. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.173. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.174. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.175. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.176. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the ConfigMap or its key must be defined 3.1.177. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.178. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.179. .spec.receivers[].snsConfigs Description List of SNS configurations Type array 3.1.180. .spec.receivers[].snsConfigs[] Description SNSConfig configures notifications via AWS SNS. See https://prometheus.io/docs/alerting/latest/configuration/#sns_configs Type object Property Type Description apiURL string The SNS API URL i.e. https://sns.us-east-2.amazonaws.com . If not specified, the SNS API URL from the SNS SDK will be used. attributes object (string) SNS message attributes. httpConfig object HTTP client configuration. message string The message content of the SNS notification. phoneNumber string Phone number if message is delivered via SMS in E.164 format. If you don't specify this value, you must specify a value for the TopicARN or TargetARN. sendResolved boolean Whether or not to notify about resolved alerts. sigv4 object Configures AWS's Signature Verification 4 signing process to sign requests. subject string Subject line when the message is delivered to email endpoints. targetARN string The mobile platform endpoint ARN if message is delivered via mobile notifications. If you don't specify this value, you must specify a value for the topic_arn or PhoneNumber. topicARN string SNS topic ARN, i.e. arn:aws:sns:us-east-2:698519295917:My-Topic If you don't specify this value, you must specify a value for the PhoneNumber or TargetARN. 3.1.181. .spec.receivers[].snsConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.182. .spec.receivers[].snsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. 
This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.183. .spec.receivers[].snsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.184. .spec.receivers[].snsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.185. .spec.receivers[].snsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.186. .spec.receivers[].snsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.187. .spec.receivers[].snsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.188. .spec.receivers[].snsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. 
tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.189. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.190. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.191. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.192. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.193. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.194. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.195. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.196. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
3.1.197. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets.
3.1.198. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined
3.1.199. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
3.1.200. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
3.1.201. .spec.receivers[].snsConfigs[].sigv4 Description Configures AWS's Signature Version 4 signing process to sign requests. Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS Role ARN to assume for authentication, as an alternative to using API keys. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used.
3.1.202. .spec.receivers[].snsConfigs[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
3.1.203. .spec.receivers[].snsConfigs[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent.
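As an illustration of the snsConfigs schema and the sigv4 block above, a fragment might look like the following; the topic ARN, region, Secret name, and keys are placeholders.

receivers:
- name: sns-example                    # placeholder receiver name
  snsConfigs:
  - topicARN: arn:aws:sns:us-east-2:123456789012:example-topic   # placeholder topic ARN
    subject: '{{ .CommonLabels.alertname }}'
    sigv4:
      region: us-east-2                # placeholder AWS region
      accessKey:
        name: aws-credentials          # placeholder Secret in the same namespace
        key: access-key
      secretKey:
        name: aws-credentials
        key: secret-key
    sendResolved: true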
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.204. .spec.receivers[].telegramConfigs Description List of Telegram configurations. Type array 3.1.205. .spec.receivers[].telegramConfigs[] Description TelegramConfig configures notifications via Telegram. See https://prometheus.io/docs/alerting/latest/configuration/#telegram_config Type object Property Type Description apiURL string The Telegram API URL i.e. https://api.telegram.org . If not specified, default API URL will be used. botToken object Telegram bot token. It is mutually exclusive with botTokenFile . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either botToken or botTokenFile is required. botTokenFile string File to read the Telegram bot token from. It is mutually exclusive with botToken . Either botToken or botTokenFile is required. It requires Alertmanager >= v0.26.0. chatID integer The Telegram chat ID. disableNotifications boolean Disable telegram notifications httpConfig object HTTP client configuration. message string Message template parseMode string Parse mode for telegram message sendResolved boolean Whether to notify about resolved alerts. 3.1.206. .spec.receivers[].telegramConfigs[].botToken Description Telegram bot token. It is mutually exclusive with botTokenFile . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either botToken or botTokenFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.207. .spec.receivers[].telegramConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.208. .spec.receivers[].telegramConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.209. .spec.receivers[].telegramConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.210. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.211. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.212. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.213. .spec.receivers[].telegramConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.214. .spec.receivers[].telegramConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.215. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.216. 
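An illustrative telegramConfigs fragment for the schema above; the Secret name and key, the chat ID, and the message template are placeholders.

receivers:
- name: telegram-example               # placeholder receiver name
  telegramConfigs:
  - botToken:                          # bot token stored in a Secret
      name: telegram-secret            # placeholder Secret in the same namespace
      key: bot-token
    chatID: -1001234567890             # placeholder chat ID (integer)
    parseMode: HTML
    message: '{{ .CommonAnnotations.summary }}'
    sendResolved: true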
.spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.217. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.218. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.219. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.220. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.221. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.222. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.223. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. 
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.224. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.225. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.226. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.227. .spec.receivers[].victoropsConfigs Description List of VictorOps configurations. Type array 3.1.228. .spec.receivers[].victoropsConfigs[] Description VictorOpsConfig configures notifications via VictorOps. See https://prometheus.io/docs/alerting/latest/configuration/#victorops_config Type object Property Type Description apiKey object The secret's key that contains the API key to use when talking to the VictorOps API. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiUrl string The VictorOps API URL. customFields array Additional custom fields for notification. customFields[] object KeyValue defines a (key, value) tuple. entityDisplayName string Contains summary of the alerted problem. httpConfig object The HTTP client's configuration. messageType string Describes the behavior of the alert (CRITICAL, WARNING, INFO). monitoringTool string The monitoring tool the state message is from. routingKey string A key used to map the alert to a team. sendResolved boolean Whether or not to notify about resolved alerts. stateMessage string Contains long explanation of the alerted problem. 3.1.229. .spec.receivers[].victoropsConfigs[].apiKey Description The secret's key that contains the API key to use when talking to the VictorOps API. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.230. .spec.receivers[].victoropsConfigs[].customFields Description Additional custom fields for notification. Type array 3.1.231. 
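A minimal victoropsConfigs sketch matching the schema above; the Secret name, routing key, and field values are placeholders, and each customFields entry uses the KeyValue tuple described in the next subsection.

receivers:
- name: victorops-example              # placeholder receiver name
  victoropsConfigs:
  - apiKey:
      name: victorops-secret           # placeholder Secret in the same namespace
      key: api-key
    routingKey: ops-team               # placeholder team routing key
    messageType: CRITICAL
    entityDisplayName: '{{ .CommonLabels.alertname }}'
    stateMessage: '{{ .CommonAnnotations.description }}'
    customFields:
    - key: cluster
      value: '{{ .CommonLabels.cluster }}'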
.spec.receivers[].victoropsConfigs[].customFields[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.232. .spec.receivers[].victoropsConfigs[].httpConfig Description The HTTP client's configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.233. .spec.receivers[].victoropsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.234. .spec.receivers[].victoropsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.235. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.236. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.237. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.238. .spec.receivers[].victoropsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.239. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.240. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.241. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.242. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.243. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.244. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. 
cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.245. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.246. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.247. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.248. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.249. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.250. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.251. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.252. .spec.receivers[].webexConfigs Description List of Webex configurations. Type array 3.1.253. 
.spec.receivers[].webexConfigs[] Description WebexConfig configures notification via Cisco Webex See https://prometheus.io/docs/alerting/latest/configuration/#webex_config Type object Required roomID Property Type Description apiURL string The Webex Teams API URL i.e. https://webexapis.com/v1/messages httpConfig object The HTTP client's configuration. You must use this configuration to supply the bot token as part of the HTTP Authorization header. message string Message template roomID string ID of the Webex Teams room where to send the messages. sendResolved boolean Whether to notify about resolved alerts. 3.1.254. .spec.receivers[].webexConfigs[].httpConfig Description The HTTP client's configuration. You must use this configuration to supply the bot token as part of the HTTP Authorization header. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.255. .spec.receivers[].webexConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.256. .spec.receivers[].webexConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.257. .spec.receivers[].webexConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.258. .spec.receivers[].webexConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.259. .spec.receivers[].webexConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.260. .spec.receivers[].webexConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.261. .spec.receivers[].webexConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.262. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.263. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.264. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.265. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.266. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.267. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.268. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.269. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.270. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.271. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.272. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.273. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.274. .spec.receivers[].webhookConfigs Description List of webhook configurations. Type array 3.1.275. .spec.receivers[].webhookConfigs[] Description WebhookConfig configures notifications via a generic receiver supporting the webhook payload. See https://prometheus.io/docs/alerting/latest/configuration/#webhook_config Type object Property Type Description httpConfig object HTTP client configuration. maxAlerts integer Maximum number of alerts to be sent per webhook message. When 0, all alerts are included. sendResolved boolean Whether or not to notify about resolved alerts. url string The URL to send HTTP POST requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. urlSecret object The secret's key that contains the webhook URL to send HTTP requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. 3.1.276. .spec.receivers[].webhookConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.277. .spec.receivers[].webhookConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.278. .spec.receivers[].webhookConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.279. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth Description BasicAuth for the client. 
This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.280. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.281. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.282. .spec.receivers[].webhookConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.283. .spec.receivers[].webhookConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.284. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.285. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.286. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.287. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.288. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.289. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.290. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.291. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.292. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.293. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.294. 
.spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.295. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.296. .spec.receivers[].webhookConfigs[].urlSecret Description The secret's key that contains the webhook URL to send HTTP requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.297. .spec.receivers[].wechatConfigs Description List of WeChat configurations. Type array 3.1.298. .spec.receivers[].wechatConfigs[] Description WeChatConfig configures notifications via WeChat. See https://prometheus.io/docs/alerting/latest/configuration/#wechat_config Type object Property Type Description agentID string apiSecret object The secret's key that contains the WeChat API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiURL string The WeChat API URL. corpID string The corp id for authentication. httpConfig object HTTP client configuration. message string API request data as defined by the WeChat API. messageType string sendResolved boolean Whether or not to notify about resolved alerts. toParty string toTag string toUser string 3.1.299. .spec.receivers[].wechatConfigs[].apiSecret Description The secret's key that contains the WeChat API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.300. .spec.receivers[].wechatConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. 
followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.301. .spec.receivers[].wechatConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.302. .spec.receivers[].wechatConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.303. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.304. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.305. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.306. .spec.receivers[].wechatConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.307. .spec.receivers[].wechatConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. 
Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.308. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.309. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.310. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.311. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.312. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.313. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.314. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.315. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.316. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.317. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.318. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.319. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.320. .spec.route Description The Alertmanager route definition for alerts matching the resource's namespace. If present, it will be added to the generated Alertmanager configuration as a first-level route. Type object Property Type Description activeTimeIntervals array (string) ActiveTimeIntervals is a list of TimeInterval names when this route should be active. continue boolean Boolean indicating whether an alert should continue matching subsequent sibling nodes. It will always be overridden to true for the first-level route by the Prometheus operator. groupBy array (string) List of labels to group by. Labels must not be repeated (unique list). Special label "... " (aggregate by all possible labels), if provided, must be the only element in the list. groupInterval string How long to wait before sending an updated notification. 
Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "5m" groupWait string How long to wait before sending the initial notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "30s" matchers array List of matchers that the alert's labels should match. For the first level route, the operator removes any existing equality and regexp matcher on the namespace label and adds a namespace: <object namespace> matcher. matchers[] object Matcher defines how to match on alert's labels. muteTimeIntervals array (string) Note: this comment applies to the field definition above but appears below otherwise it gets included in the generated manifest. CRD schema doesn't support self-referential types for now (see https://github.com/kubernetes/kubernetes/issues/62872 ). We have to use an alternative type to circumvent the limitation. The downside is that the Kube API can't validate the data beyond the fact that it is a valid JSON representation. MuteTimeIntervals is a list of TimeInterval names that will mute this route when matched. receiver string Name of the receiver for this route. If not empty, it should be listed in the receivers field. repeatInterval string How long to wait before repeating the last notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "4h" routes array (undefined) Child routes. 3.1.321. .spec.route.matchers Description List of matchers that the alert's labels should match. For the first level route, the operator removes any existing equality and regexp matcher on the namespace label and adds a namespace: <object namespace> matcher. Type array 3.1.322. .spec.route.matchers[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.323. .spec.timeIntervals Description List of TimeInterval specifying when the routes should be muted or active. Type array 3.1.324. .spec.timeIntervals[] Description TimeInterval specifies the periods in time when notifications will be muted or active. Type object Property Type Description name string Name of the time interval. timeIntervals array TimeIntervals is a list of TimePeriod. timeIntervals[] object TimePeriod describes periods of time. 3.1.325. .spec.timeIntervals[].timeIntervals Description TimeIntervals is a list of TimePeriod. Type array 3.1.326. .spec.timeIntervals[].timeIntervals[] Description TimePeriod describes periods of time. Type object Property Type Description daysOfMonth array DaysOfMonth is a list of DayOfMonthRange daysOfMonth[] object DayOfMonthRange is an inclusive range of days of the month beginning at 1 months array (string) Months is a list of MonthRange times array Times is a list of TimeRange times[] object TimeRange defines a start and end time in 24hr format weekdays array (string) Weekdays is a list of WeekdayRange years array (string) Years is a list of YearRange 3.1.327. .spec.timeIntervals[].timeIntervals[].daysOfMonth Description DaysOfMonth is a list of DayOfMonthRange Type array 3.1.328.
.spec.timeIntervals[].timeIntervals[].daysOfMonth[] Description DayOfMonthRange is an inclusive range of days of the month beginning at 1 Type object Property Type Description end integer End of the inclusive range start integer Start of the inclusive range 3.1.329. .spec.timeIntervals[].timeIntervals[].times Description Times is a list of TimeRange Type array 3.1.330. .spec.timeIntervals[].timeIntervals[].times[] Description TimeRange defines a start and end time in 24hr format Type object Property Type Description endTime string EndTime is the end time in 24hr format. startTime string StartTime is the start time in 24hr format. 3.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs GET : list objects of kind AlertmanagerConfig /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs DELETE : delete collection of AlertmanagerConfig GET : list objects of kind AlertmanagerConfig POST : create an AlertmanagerConfig /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs/{name} DELETE : delete an AlertmanagerConfig GET : read the specified AlertmanagerConfig PATCH : partially update the specified AlertmanagerConfig PUT : replace the specified AlertmanagerConfig 3.2.1. /apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs HTTP method GET Description list objects of kind AlertmanagerConfig Table 3.1. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfigList schema 401 - Unauthorized Empty 3.2.2. /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs HTTP method DELETE Description delete collection of AlertmanagerConfig Table 3.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertmanagerConfig Table 3.3. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertmanagerConfig Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body AlertmanagerConfig schema Table 3.6.
HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 201 - Created AlertmanagerConfig schema 202 - Accepted AlertmanagerConfig schema 401 - Unauthorized Empty 3.2.3. /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the AlertmanagerConfig HTTP method DELETE Description delete an AlertmanagerConfig Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertmanagerConfig Table 3.10. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertmanagerConfig Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertmanagerConfig Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body AlertmanagerConfig schema Table 3.15. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 201 - Created AlertmanagerConfig schema 401 - Unauthorized Empty
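Taken together, the preceding field-by-field reference is enough to write a working AlertmanagerConfig object. The following manifest is a minimal sketch that combines a first-level route with a webhook receiver that authenticates through HTTP basic auth; the object name, namespace, receiver name, Secret name, and webhook URL are illustrative placeholders, not values defined by the schema.

```yaml
# Illustrative only: names, namespace, Secret, and URL below are placeholders.
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-config
  namespace: example-ns
spec:
  route:
    receiver: example-webhook        # must match a receivers[].name below
    groupBy: ['alertname']
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 4h
    matchers:
      - name: severity
        value: critical
        matchType: '='
  receivers:
    - name: example-webhook
      webhookConfigs:
        - url: https://example.internal/alert-hook   # one of url or urlSecret
          sendResolved: true
          maxAlerts: 0                               # 0 includes all alerts
          httpConfig:
            basicAuth:
              username:
                name: webhook-credentials            # Secret in the same namespace
                key: username
              password:
                name: webhook-credentials
                key: password
```

A manifest like this can be created through the namespaced POST endpoint listed in section 3.2.2, and the referenced Secret must exist in the same namespace as the AlertmanagerConfig object so that the Prometheus Operator can read it.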
Index
Index A active/active configuration definition, Overview of DM-Multipath illustration, Overview of DM-Multipath active/passive configuration definition, Overview of DM-Multipath illustration, Overview of DM-Multipath alias parameter , Multipaths Device Configuration Attributes configuration file, Multipath Device Identifiers B bindings_file parameter, Configuration File Defaults blacklist configuration file, Configuration File Blacklist default devices, Blacklisting By Device Name device name, Blacklisting By Device Name in configuration file, Setting Up DM-Multipath WWID, Blacklisting By WWID bl_product parameter, Configuration File Devices C chkconfig command, Setting Up DM-Multipath configuration file alias parameter, Multipaths Device Configuration Attributes bindings_file parameter, Configuration File Defaults blacklist, Setting Up DM-Multipath , Configuration File Blacklist bl_product parameter, Configuration File Devices devnode_blacklist, Configuration File Blacklist failback parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices features parameter, Configuration File Defaults , Configuration File Devices flush_on_last_del parameter, Configuration File Defaults getuid_callout parameter, Configuration File Defaults , Configuration File Devices gid parameter, Configuration File Defaults hardware_handler parameter, Configuration File Devices max_fds parameter, Configuration File Devices mode parameter, Configuration File Defaults no_path_retry parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices overview, Configuration File Overview path_checker parameter, Configuration File Defaults , Configuration File Devices path_grouping_policy parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices path_selector parameter, Multipaths Device Configuration Attributes , Configuration File Devices polling-interval parameter, Configuration File Defaults prio_callout parameter, Configuration File Defaults , Configuration File Devices product parameter, Configuration File Devices rr_min_io parameter, Configuration File Defaults , Multipaths Device Configuration Attributes rr_weight parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices selector parameter, Configuration File Defaults udev_dir parameter, Configuration File Defaults uid parameter, Configuration File Defaults user_friendly_names parameter, Configuration File Defaults vendor parameter, Configuration File Devices wwid parameter, Multipaths Device Configuration Attributes configuring DM-Multipath, Setting Up DM-Multipath D defaults section multipath.conf file, Configuration File Defaults dev/mapper directory, Multipath Device Identifiers dev/mpath directory, Multipath Device Identifiers device name, Multipath Device Identifiers device-mapper-multipath package, Setting Up DM-Multipath devices adding, Adding Devices to the Multipathing Database , Configuration File Devices devices section multipath.conf file, Configuration File Devices devnode_blacklist configuration file, Configuration File Blacklist DM-Multipath and LVM, Multipath Devices in Logical Volumes components, DM-Multipath Components configuration file, The DM-Multipath Configuration File configuring, Setting Up DM-Multipath definition, Device Mapper Multipathing device name, Multipath Device Identifiers devices, Multipath Devices failover, Overview of 
DM-Multipath overview, Overview of DM-Multipath redundancy, Overview of DM-Multipath setup, Setting Up DM-Multipath setup, overview, DM-Multipath Setup Overview dm-multipath kernel module , DM-Multipath Components dm-n devices, Multipath Device Identifiers dmsetup command, determining device mapper entries, Determining Device Mapper Entries with the dmsetup Command E etc/multipath.conf package, Setting Up DM-Multipath F failback parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices failover, Overview of DM-Multipath features parameter, Configuration File Defaults , Configuration File Devices feedback, Feedback flush_on_last_del parameter, Configuration File Defaults G getuid_callout parameter, Configuration File Defaults , Configuration File Devices gid parameter, Configuration File Defaults H hardware_handler parameter, Configuration File Devices K kpartx command , DM-Multipath Components L local disks, ignoring, Ignoring Local Disks when Generating Multipath Devices LVM physical volumes multipath devices, Multipath Devices in Logical Volumes lvm.conf file , Multipath Devices in Logical Volumes M max_fds parameter, Configuration File Devices mode parameter, Configuration File Defaults modprobe command, Setting Up DM-Multipath multipath command , DM-Multipath Components , Setting Up DM-Multipath options, Multipath Command Options output, Multipath Command Output queries, Multipath Queries with multipath Command multipath devices, Multipath Devices logical volumes, Multipath Devices in Logical Volumes LVM physical volumes, Multipath Devices in Logical Volumes multipath.conf file, Storage Array Support , The DM-Multipath Configuration File defaults section, Configuration File Defaults devices section, Configuration File Devices multipaths section, Multipaths Device Configuration Attributes multipath.conf.annotated file, The DM-Multipath Configuration File multipath.conf.defaults file, Storage Array Support , The DM-Multipath Configuration File multipathd command, Troubleshooting with the multipathd Interactive Console interactive console, Troubleshooting with the multipathd Interactive Console multipathd daemon , DM-Multipath Components multipathd start command, Setting Up DM-Multipath multipaths section multipath.conf file, Multipaths Device Configuration Attributes N no_path_retry parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices P path_checker parameter, Configuration File Defaults , Configuration File Devices path_grouping_policy parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices path_selector parameter, Multipaths Device Configuration Attributes , Configuration File Devices polling_interval parameter, Configuration File Defaults prio_callout parameter, Configuration File Defaults , Configuration File Devices product parameter, Configuration File Devices R rr_min_io parameter, Configuration File Defaults , Multipaths Device Configuration Attributes rr_weight parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices S selector parameter, Configuration File Defaults setup DM-Multipath, Setting Up DM-Multipath storage array support, Storage Array Support storage arrays adding, Adding Devices to the Multipathing Database , Configuration File Devices U udev_dir parameter, Configuration File Defaults uid parameter, Configuration File Defaults 
user_friendly_names parameter , Multipath Device Identifiers , Configuration File Defaults V vendor parameter, Configuration File Devices W World Wide Identifier (WWID), Multipath Device Identifiers wwid parameter, Multipaths Device Configuration Attributes
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/ix01
Chapter 12. Installing a cluster on Azure in a restricted network
Chapter 12. Installing a cluster on Azure in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster on Microsoft Azure in a restricted network by creating an internal mirror of the installation release content on an existing Azure Virtual Network (VNet). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster requires internet access to use the Azure APIs. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VNet in Azure. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VNet. You must use a user-provisioned VNet that satisfies one of the following requirements: The VNet contains the mirror registry The VNet has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 12.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. 
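For illustration, a minimal install-config.yaml sketch of the fields involved in user-defined routing, assuming an existing VNet; the placeholder names follow the conventions used later in this chapter, and the full annotated sample file shows these fields in context:
platform:
  azure:
    # existing VNet and subnets (placeholders; see "Creating the installation configuration file")
    networkResourceGroupName: <vnet_resource_group>
    virtualNetwork: <vnet>
    controlPlaneSubnet: <control_plane_subnet>
    computeSubnet: <compute_subnet>
    # send all egress through your own routing solution, such as Azure Firewall
    outboundType: UserDefinedRouting
# required when Azure Firewall restricts internet access
publish: Internal
Setting publish to Internal in the same file creates the internal Ingress Controller and private load balancer described later in this chapter.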
A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Restricted cluster with Azure Firewall You can use Azure Firewall to restrict the outbound routing for the Virtual Network (VNet) that is used to install the OpenShift Container Platform cluster. For more information, see providing user-defined routing with Azure Firewall . You can create a OpenShift Container Platform cluster in a restricted network by using VNet with Azure Firewall and configuring the user-defined routing. Important If you are using Azure Firewall for restricting internet access, you must set the publish field to Internal in the install-config.yaml file. This is because Azure Firewall does not work properly with Azure public load balancers . 12.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.15, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 12.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. 
For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 12.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 12.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x * Allows connections to Azure APIs. You must set a Destination Service Tag to AzureCloud . [1] x x * Denies connections to the internet. You must set a Destination Service Tag to Internet . [1] x x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. 
Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 12.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 12.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 12.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 12.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 12.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent: USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry.
You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory>, specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id: Enter the subscription ID to use for the cluster. azure tenant id: Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id: If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret: If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to.
The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field: networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4 1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. 
This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 12.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 12.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 12.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 12.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 12.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 12.6.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 
4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 12.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 12.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 
11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 When using Azure Firewall to restrict Internet access, you must configure outbound routing to send traffic through the Azure Firewall. Configuring user-defined routing prevents exposing external endpoints in your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide the imageContentSources section from the output of the command to mirror the repository. 26 How to publish the user-facing endpoints of your cluster. When using Azure Firewall to restrict Internet access, set publish to Internal to deploy a private cluster. The user-facing endpoints then cannot be accessed from the internet. The default value is External . 12.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. 
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . 
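Before replacing an older client, a quick sanity check of whichever oc binary is currently first on your PATH can be useful; oc version is a standard subcommand, although the exact output format varies between releases, so treat this as an optional sketch:
USD oc version
If the reported client version is older than 4.15, install the new binary by following the platform-specific steps below.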
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 12.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 12.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 12.8.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 12.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 12.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 12.8.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 12.8.2.3. 
Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: $ openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: $ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: $ cp -a /<path_to_ccoctl_output_dir>/tls . 12.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete!
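The manifest hand-off and the cluster deployment reduce to a few copy commands followed by a single installer invocation. The sketch below assumes an installation directory named install_dir that already contains your customized install-config.yaml, and that the ccoctl output directory from the previous step is ./ccoctl-output; both names are illustrative:

# Generate the installer manifests if they do not already exist
./openshift-install create manifests --dir install_dir

# Hand the ccoctl-generated credential secrets and key material to the installer
cp ./ccoctl-output/manifests/* install_dir/manifests/
cp -a ./ccoctl-output/tls install_dir/

# Start the installation; this can take a significant amount of time
./openshift-install create cluster --dir install_dir --log-level=info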
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 12.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: $ oc whoami Example output system:admin 12.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
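As a quick post-install sanity check tying together the login steps from section 12.10, the following sketch exports the kubeconfig and confirms both the identity and node availability; the install_dir path is an assumed example:

# Point oc at the freshly installed cluster
export KUBECONFIG=install_dir/auth/kubeconfig

# Confirm the identity that the kubeconfig provides
oc whoami            # expected: system:admin

# Confirm that the cluster nodes have registered and are Ready
oc get nodes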
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo $PATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo $PATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')", "oc adm release extract --from=$RELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')", "CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)", "oc image extract $CCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')", "oc adm release extract --from=$RELEASE_IMAGE --credentials-requests --included \\ 1
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure/installing-restricted-networks-azure-installer-provisioned
Chapter 6. Storage classes and storage pools
Chapter 6. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 6.1. Creating storage classes and pools You can create a storage class using an existing pool, or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Select an existing Storage Pool from the list, or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 6.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must configure access to KMS for one of the following: Using vaulttokens : Configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Configure access as described in Configuring access to KMS using Thales CipherTrust Manager Procedure In the OpenShift Web Console, navigate to Storage StorageClasses . Click Create Storage Class .
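Once a custom storage class exists, consuming it is a standard persistent volume claim operation. The following is a minimal sketch, assuming a hypothetical storage class named my-rbd-2way created through the wizard above; the PVC name, namespace, and size are placeholders:

# Create a PVC that requests storage from the custom storage class
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  namespace: my-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-rbd-2way
EOF

# With the default WaitForFirstConsumer binding mode, the PVC stays Pending
# until a pod that uses it is scheduled
oc get pvc demo-pvc -n my-app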
Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com , which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list, or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Select existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop-down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select the Key Management Service Provider . If Vault is selected as the Key Management Service Provider , follow these steps: Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number, and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . If Thales CipherTrust Manager (using KMIP) is selected as the Key Management Service Provider , follow these steps: Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1, and kv-v2 for KV secret engine API, version 2.
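The encryptionKMSID lookup and the ConfigMap edit can also be performed from the command line instead of the web console. This is a sketch under the assumption that the OpenShift Data Foundation components run in the usual openshift-storage namespace and that the storage class is named ocs-encrypted (a placeholder):

# Capture the encryptionKMSID that the new storage class references
oc get storageclass ocs-encrypted -o yaml | grep encryptionKMSID

# Inspect the KMS connection details for that ID
oc get configmap csi-kms-connection-details -n openshift-storage -o yaml

# Open the ConfigMap for editing to add the vaultBackend parameter
oc edit configmap csi-kms-connection-details -n openshift-storage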
Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with its technology partners to provide this documentation as a service to its customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp .
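If you are unsure which KV secret engine API version a given backend path uses, you can query Vault directly before setting vaultBackend. This assumes you have the vault CLI authenticated against the same Vault server; the backend path name odf-backend is a placeholder:

# List secret engines with details; for KV v2 mounts the Options column shows version=2
vault secrets list -detailed | grep odf-backend

# Set vaultBackend to kv for version 1 or kv-v2 for version 2 accordingly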
[ "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/storage-classes-and-storage-pools_osp