Chapter 9. Configuring a deployment Configure and manage a Streams for Apache Kafka deployment to your precise needs using Streams for Apache Kafka custom resources. Streams for Apache Kafka provides example custom resources with each release, allowing you to configure and create instances of supported Kafka components. Fine-tune your deployment by configuring custom resources to include additional features according to your specific requirements. For specific areas of configuration, namely metrics, logging, and external configuration for Kafka Connect connectors, you can also use ConfigMap resources. By using a ConfigMap resource to incorporate configuration, you centralize maintenance. You can also use configuration providers to load configuration from external sources, which we recommend for supplying the credentials for Kafka Connect connector configuration. Use custom resources to configure and create instances of the following components: Kafka clusters Kafka Connect clusters Kafka MirrorMaker Kafka Bridge Cruise Control You can also use custom resource configuration to manage your instances or modify your deployment to introduce additional features. This might include configuration that supports the following: Specifying node pools Securing client access to Kafka brokers Accessing Kafka brokers from outside the cluster Creating topics Creating users (clients) Controlling feature gates Changing logging frequency Allocating resource limits and requests Introducing features, such as Streams for Apache Kafka Drain Cleaner, Cruise Control, or distributed tracing. The Streams for Apache Kafka Custom Resource API Reference describes the properties you can use in your configuration. Note Labels applied to a custom resource are also applied to the OpenShift resources making up its cluster. This provides a convenient mechanism for resources to be labeled as required. Applying changes to a custom resource configuration file You add configuration to a custom resource using spec properties. After adding the configuration, you can use oc to apply the changes to a custom resource configuration file: oc apply -f <kafka_configuration_file> 9.1. Using example configuration files Further enhance your deployment by incorporating additional supported configuration. Example configuration files are provided with the downloadable release artifacts from the Streams for Apache Kafka software downloads page . The example files include only the essential properties and values for custom resources by default. You can download and apply the examples using the oc command-line tool. The examples can serve as a starting point when building your own Kafka component configuration for deployment. Note If you installed Streams for Apache Kafka using the Operator, you can still download the example files and use them to upload configuration. The release artifacts include an examples directory that contains the configuration examples. Example configuration files provided with Streams for Apache Kafka 1 KafkaUser custom resource configuration, which is managed by the User Operator. 2 KafkaTopic custom resource configuration, which is managed by Topic Operator. 3 Authentication and authorization configuration for Kafka components. Includes example configuration for TLS and SCRAM-SHA-512 authentication. The Red Hat Single Sign-On example includes Kafka custom resource configuration and a Red Hat Single Sign-On realm specification. You can use the example to try Red Hat Single Sign-On authorization services. 
There is also an example with OAuth authentication and Keycloak authorization metrics enabled. 4 Kafka custom resource configuration for a deployment of MirrorMaker. Includes example configuration for replication policy and synchronization frequency. 5 Metrics configuration, including Prometheus installation and Grafana dashboard files. 6 Kafka custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment. 7 KafkaNodePool configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper. 8 Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals. 9 KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment. 10 KafkaBridge custom resource configuration for a deployment of Kafka Bridge. 9.2. Configuring Kafka Update the spec properties of the Kafka custom resource to configure your Kafka deployment. As well as configuring Kafka, you can add configuration for ZooKeeper and the Streams for Apache Kafka Operators. Common configuration properties, such as logging and healthchecks, are configured independently for each component. Configuration options that are particularly important include the following: Resource requests (CPU / Memory) JVM options for maximum and minimum memory allocation Listeners for connecting clients to Kafka brokers (and authentication of clients) Authentication Storage Rack awareness Metrics Cruise Control for cluster rebalancing Metadata version for KRaft-based Kafka clusters Inter-broker protocol version for ZooKeeper-based Kafka clusters The .spec.kafka.metadataVersion property or the inter.broker.protocol.version property in config must be a version supported by the specified Kafka version ( spec.kafka.version ). The property represents the Kafka metadata or inter-broker protocol version used in a Kafka cluster. If either of these properties is not set in the configuration, the Cluster Operator updates the version to the default for the Kafka version used. Note The oldest supported metadata version is 3.3. Using a metadata version that is older than the Kafka version might cause some features to be disabled. For a deeper understanding of the Kafka cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . Managing TLS certificates When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. If required, you can manually renew the cluster and clients CA certificates before their renewal period starts. You can also replace the keys used by the cluster and clients CA certificates. For more information, see Renewing CA certificates manually and Replacing private keys .
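For example, manual renewal is typically triggered by annotating the CA certificate secrets so that the certificates are regenerated at the next reconciliation. The following is a minimal sketch, assuming a cluster named my-cluster, the standard <cluster_name>-cluster-ca-cert and <cluster_name>-clients-ca-cert secret names, and the strimzi.io/force-renew annotation described in the renewal documentation:
Renewing the cluster and clients CA certificates manually (sketch)
# Illustrative: replace my-cluster with your Kafka resource name
oc annotate secret my-cluster-cluster-ca-cert strimzi.io/force-renew="true"
oc annotate secret my-cluster-clients-ca-cert strimzi.io/force-renew="true"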
Example Kafka custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 3.7.0 2 logging: 3 type: inline loggers: kafka.root.logger.level: INFO resources: 4 requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" readinessProbe: 5 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 6 -Xms: 8192m -Xmx: 8192m image: my-org/my-image:latest 7 listeners: 8 - name: plain 9 port: 9092 10 type: internal 11 tls: false 12 configuration: useServiceDnsDomain: true 13 - name: tls port: 9093 type: internal tls: true authentication: 14 type: tls - name: external1 15 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 16 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 17 type: simple config: 18 auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.7" storage: 19 type: persistent-claim 20 size: 10000Gi rack: 21 topologyKey: topology.kubernetes.io/zone metricsConfig: 22 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 23 name: my-config-map key: my-key # ... zookeeper: 24 replicas: 3 25 logging: 26 type: inline loggers: zookeeper.root.logger: INFO resources: requests: memory: 8Gi cpu: "2" limits: memory: 8Gi cpu: "2" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metricsConfig: # ... entityOperator: 27 tlsSidecar: 28 resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 29 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 30 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" kafkaExporter: 31 # ... cruiseControl: 32 # ... 1 The number of replica nodes. 2 Kafka version, which can be changed to a supported version by following the upgrade procedure. 3 Kafka loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties key in the ConfigMap. For the Kafka kafka.root.logger.level logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 4 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 5 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 6 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka. 7 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 8 Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as internal or external listeners for connection from inside or outside the OpenShift cluster. 9 Name to identify the listener. Must be unique within the Kafka cluster. 10 Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. 
Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 11 Listener type specified as internal or cluster-ip (to expose Kafka using per-broker ClusterIP services), or for external listeners, as route (OpenShift only), loadbalancer , nodeport or ingress (Kubernetes only). 12 Enables or disables TLS encryption for each listener. For route and ingress type listeners, TLS encryption must always be enabled by setting it to true . 13 Defines whether the fully-qualified DNS names including the cluster service suffix (usually .cluster.local ) are assigned. 14 Listener authentication mechanism specified as mTLS, SCRAM-SHA-512, or token-based OAuth 2.0. 15 External listener configuration specifies how the Kafka cluster is exposed outside OpenShift, such as through a route , loadbalancer or nodeport . 16 Optional configuration for a Kafka listener certificate managed by an external CA (certificate authority). The brokerCertChainAndKey specifies a Secret that contains a server certificate and a private key. You can configure Kafka listener certificates on any listener with enabled TLS encryption. 17 Authorization enables simple, OAUTH 2.0, or OPA authorization on the Kafka broker. Simple authorization uses the AclAuthorizer and StandardAuthorizer Kafka plugins. 18 Broker configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. 19 Storage size for persistent volumes may be increased and additional volumes may be added to JBOD storage. 20 Persistent storage has additional configuration options, such as a storage id and class for dynamic volume provisioning. 21 Rack awareness configuration to spread replicas across different racks, data centers, or availability zones. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. 22 Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter). 23 Rules for exporting metrics in Prometheus format to a Grafana dashboard through the Prometheus JMX Exporter, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 24 ZooKeeper-specific configuration, which contains properties similar to the Kafka configuration. 25 The number of ZooKeeper nodes. ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven. The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for Streams for Apache Kafka. 26 ZooKeeper loggers and log levels. 27 Entity Operator configuration, which specifies the configuration for the Topic Operator and User Operator. 28 Entity Operator TLS sidecar configuration. Entity Operator uses the TLS sidecar for secure communication with ZooKeeper. 29 Specified Topic Operator loggers and log levels. This example uses inline logging. 
30 Specified User Operator loggers and log levels. 31 Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use. 32 Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster. 9.2.1. Setting limits on brokers using the Kafka Static Quota plugin Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by configuring the Kafka resource. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers. You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps. Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit. Note For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. Prerequisites The Cluster Operator that manages the Kafka cluster is running. Procedure Add the plugin properties to the config of the Kafka resource. The plugin properties are shown in this example configuration. Example Kafka Static Quota plugin configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6 1 Loads the Kafka Static Quota plugin. 2 Sets the producer byte-rate threshold. 1 MBps in this example. 3 Sets the consumer byte-rate threshold. 1 MBps in this example. 4 Sets the lower soft limit for storage. 400 GB in this example. 5 Sets the higher hard limit for storage. 500 GB in this example. 6 Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check. Update the resource. oc apply -f <kafka_configuration_file> Additional resources KafkaUserQuotas schema reference 9.2.2. Default ZooKeeper configuration values When deploying ZooKeeper with Streams for Apache Kafka, some of the default configuration set by Streams for Apache Kafka differs from the standard ZooKeeper defaults. This is because Streams for Apache Kafka sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within an OpenShift environment. The default configuration for key ZooKeeper properties in Streams for Apache Kafka is as follows: Table 9.1. Default ZooKeeper Properties in Streams for Apache Kafka Property Default value Description tickTime 2000 The length of a single tick in milliseconds, which determines the length of a session timeout. 
initLimit 5 The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster. syncLimit 2 The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster. autopurge.purgeInterval 1 Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper transaction log. admin.enableServer false Flag to disable the ZooKeeper admin server. The admin server is not used by Streams for Apache Kafka. Important Modifying these default values as zookeeper.config in the Kafka custom resource may impact the behavior and performance of your ZooKeeper cluster. 9.2.3. Deleting Kafka nodes using annotations This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically. Warning Deleting a PersistentVolumeClaim can cause permanent data loss and the availability of your cluster cannot be guaranteed. The following procedure should only be performed if you have encountered storage issues. Prerequisites A running Cluster Operator Procedure Find the name of the Pod that you want to delete. Kafka broker pods are named <cluster_name>-kafka-<index_number> , where <index_number> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0 . Use oc annotate to annotate the Pod resource in OpenShift: oc annotate pod <cluster_name>-kafka-<index_number> strimzi.io/delete-pod-and-pvc="true" Wait for the reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated. 9.2.4. Deleting ZooKeeper nodes using annotations This procedure describes how to delete an existing ZooKeeper node by using an OpenShift annotation. Deleting a ZooKeeper node consists of deleting both the Pod on which ZooKeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically. Warning Deleting a PersistentVolumeClaim can cause permanent data loss and the availability of your cluster cannot be guaranteed. The following procedure should only be performed if you have encountered storage issues. Prerequisites A running Cluster Operator Procedure Find the name of the Pod that you want to delete. ZooKeeper pods are named <cluster_name>-zookeeper-<index_number> , where <index_number> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-zookeeper-0 . Use oc annotate to annotate the Pod resource in OpenShift: oc annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/delete-pod-and-pvc="true" Wait for the reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated. 9.3. Configuring node pools Update the spec properties of the KafkaNodePool custom resource to configure a node pool deployment. A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation. 
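A minimal sketch showing only these mandatory settings, based on the node pool examples later in this chapter (the pool and cluster names are illustrative), might look as follows:
Minimal KafkaNodePool configuration (sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a                      # illustrative pool name
  labels:
    strimzi.io/cluster: my-cluster  # the Kafka cluster the pool belongs to
spec:
  replicas: 3                       # mandatory: number of nodes
  roles:                            # mandatory: node roles
    - broker
  storage:                          # mandatory: storage allocation
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false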
Optionally, you can also specify values for the following properties: resources to specify memory and cpu requests and limits template to specify custom configuration for pods and other OpenShift resources jvmOptions to specify custom JVM configuration for heap size, runtime and other options The Kafka resource represents the configuration for all nodes in the Kafka cluster. The KafkaNodePool resource represents the configuration for nodes only in the node pool. If a configuration property is not specified in KafkaNodePool , it is inherited from the Kafka resource. Configuration specified in the KafkaNodePool resource takes precedence if set in both resources. For example, if both the node pool and Kafka configuration includes jvmOptions , the values specified in the node pool configuration are used. When -Xmx: 1024m is set in KafkaNodePool.spec.jvmOptions and -Xms: 512m is set in Kafka.spec.kafka.jvmOptions , the node uses the value from its node pool configuration. Properties from Kafka and KafkaNodePool schemas are not combined. To clarify, if KafkaNodePool.spec.template includes only podSet.metadata.labels , and Kafka.spec.kafka.template includes podSet.metadata.annotations and pod.metadata.labels , the template values from the Kafka configuration are ignored since there is a template value in the node pool configuration. For a deeper understanding of the node pool configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . Example configuration for a node pool in a cluster using KRaft mode apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kraft-dual-role 1 labels: strimzi.io/cluster: my-cluster 2 spec: replicas: 3 3 roles: 4 - controller - broker storage: 5 type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: 6 requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" 1 Unique name for the node pool. 2 The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster. 3 Number of replicas for the nodes. 4 Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers. 5 Storage specification for the nodes. 6 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. Note The configuration for the Kafka resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations . Example configuration for a node pool in a cluster using ZooKeeper apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker 1 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" 1 Roles for the nodes in the node pool, which can only be broker when using Kafka with ZooKeeper. 9.3.1. Assigning IDs to node pools for scaling operations This procedure describes how to use annotations for advanced node ID handling by the Cluster Operator when performing scaling operations on node pools. You specify the node IDs to use, rather than the Cluster Operator using the ID in sequence. Management of node IDs in this way gives greater control. 
To add a range of IDs, you assign the following annotations to the KafkaNodePool resource: strimzi.io/next-node-ids to add a range of IDs that are used for new brokers strimzi.io/remove-node-ids to add a range of IDs for removing existing brokers You can specify an array of individual node IDs, ID ranges, or a combination of both. For example, you can specify the following range of IDs: [0, 1, 2, 10-20, 30] for scaling up the Kafka node pool. This format allows you to specify a combination of individual node IDs ( 0 , 1 , 2 , 30 ) as well as a range of IDs ( 10-20 ). In a typical scenario, you might specify a range of IDs for scaling up and a single node ID to remove a specific node when scaling down. In this procedure, we add the scaling annotations to node pools as follows: pool-a is assigned a range of IDs for scaling up pool-b is assigned a range of IDs for scaling down During the scaling operation, IDs are used as follows: Scale up picks up the lowest available ID in the range for the new node. Scale down removes the node with the highest available ID in the range. If there are gaps in the sequence of node IDs assigned in the node pool, the node to be added is assigned an ID that fills the gap. The annotations don't need to be updated after every scaling operation. Any unused IDs are still valid for future scaling events. The Cluster Operator allows you to specify a range of IDs in either ascending or descending order, so you can define them in the order the nodes are scaled. For example, when scaling up, you can specify a range such as [1000-1999] , and the new nodes are assigned the lowest IDs: 1000 , 1001 , 1002 , 1003 , and so on. Conversely, when scaling down, you can specify a range like [1999-1000] , ensuring that nodes with the highest IDs are removed: 1003 , 1002 , 1001 , 1000 , and so on. If you don't specify an ID range using the annotations, the Cluster Operator follows its default behavior for handling IDs during scaling operations. Node IDs start at 0 (zero) and run sequentially across the Kafka cluster. The lowest ID is assigned to a new node. Gaps in node IDs are filled across the cluster. This means that they might not run sequentially within a node pool. The default behavior for scaling up is to add the lowest available node ID across the cluster; and for scaling down, it is to remove the node in the node pool with the highest available node ID. The default approach is also applied if the assigned range of IDs is misformatted, the scaling up range runs out of IDs, or the scaling down range does not apply to any in-use nodes. Prerequisites The Cluster Operator must be deployed. (Optional) Use the reserved.broker.max.id configuration property to extend the allowable range for node IDs within your node pools. By default, Apache Kafka restricts node IDs to numbers ranging from 0 to 999. To use node ID values greater than 999, add the reserved.broker.max.id configuration property to the Kafka custom resource and specify the required maximum node ID value. In this example, the maximum node ID is set at 10000. Node IDs can then be assigned up to that value. Example configuration for the maximum node ID number apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: config: reserved.broker.max.id: 10000 # ... Procedure Annotate the node pool with the IDs to use when scaling up or scaling down, as shown in the following examples.
IDs for scaling up are assigned to node pool pool-a : Assigning IDs for scaling up oc annotate kafkanodepool pool-a strimzi.io/next-node-ids="[0,1,2,10-20,30]" The lowest available ID from this range is used when adding a node to pool-a . IDs for scaling down are assigned to node pool pool-b : Assigning IDs for scaling down oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[60-50,9,8,7]" The highest available ID from this range is removed when scaling down pool-b . Note If you want to remove a specific node, you can assign a single node ID to the scaling down annotation: oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[3]" . You can now scale the node pool. For more information, see the following: Section 9.3.3, "Adding nodes to a node pool" Section 9.3.4, "Removing nodes from a node pool" Section 9.3.5, "Moving nodes between node pools" On reconciliation, a warning is given if the annotations are misformatted. After you have performed the scaling operation, you can remove the annotation if it's no longer needed. Removing the annotation for scaling up oc annotate kafkanodepool pool-a strimzi.io/next-node-ids- Removing the annotation for scaling down oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids- 9.3.2. Impact on racks when moving nodes from node pools If rack awareness is enabled on a Kafka cluster, replicas can be spread across different racks, data centers, or availability zones. When moving nodes from node pools, consider the implications on the cluster topology, particularly regarding rack awareness. Removing specific pods from node pools, especially out of order, may break the cluster topology or cause an imbalance in distribution across racks. An imbalance can impact both the distribution of nodes themselves and the partition replicas within the cluster. An uneven distribution of nodes and partitions across racks can affect the performance and resilience of the Kafka cluster. Plan the removal of nodes strategically to maintain the required balance and resilience across racks. Use the strimzi.io/remove-node-ids annotation with caution when moving nodes with specific IDs. Ensure that configuration to spread partition replicas across racks and for clients to consume from the closest replicas is not broken. Tip Use Cruise Control and the KafkaRebalance resource with the RackAwareGoal to make sure that replicas remain distributed across different racks. 9.3.3. Adding nodes to a node pool This procedure describes how to scale up a node pool to add new nodes. Currently, scale up is only possible for broker-only node pools containing nodes that run as dedicated brokers. In this procedure, we start with three nodes for node pool pool-a : Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-3 , which has a node ID of 3 . Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. (Optional) For scale up operations, you can specify the node IDs to use in the operation . If you have assigned a range of node IDs for the operation, the ID of the node being added is determined by the sequence of nodes given. If you have assigned a single node ID, a node is added with the specified ID.
Otherwise, the lowest available node ID across the cluster is used. Procedure Create a new node in the node pool. For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas: oc scale kafkanodepool pool-a --replicas=4 Check the status of the deployment and wait for the pods in the node pool to be created and ready ( 1/1 ). oc get pods -n <my_cluster_operator_namespace> Output shows four Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0 Reassign the partitions after increasing the number of nodes in the node pool. After scaling up a node pool, use the Cruise Control add-brokers mode to move partition replicas from existing brokers to the newly added brokers. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: add-brokers brokers: [3] We are reassigning partitions to node my-cluster-pool-a-3 . The reassignment can take some time depending on the number of topics and partitions in the cluster. 9.3.4. Removing nodes from a node pool This procedure describes how to scale down a node pool to remove nodes. Currently, scale down is only possible for broker-only node pools containing nodes that run as dedicated brokers. In this procedure, we start with four nodes for node pool pool-a : Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0 Node IDs are appended to the name of the node on creation. We remove node my-cluster-pool-a-3 , which has a node ID of 3 . Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. (Optional) For scale down operations, you can specify the node IDs to use in the operation . If you have assigned a range of node IDs for the operation, the ID of the node being removed is determined by the sequence of nodes given. If you have assigned a single node ID, the node with the specified ID is removed. Otherwise, the node with the highest available ID in the node pool is removed. Procedure Reassign the partitions before decreasing the number of nodes in the node pool. Before scaling down a node pool, use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: remove-brokers brokers: [3] We are reassigning partitions from node my-cluster-pool-a-3 . The reassignment can take some time depending on the number of topics and partitions in the cluster. After the reassignment process is complete, and the node being removed has no live partitions, reduce the number of Kafka nodes in the node pool. For example, node pool pool-a has four replicas. We remove a node by decreasing the number of replicas: oc scale kafkanodepool pool-a --replicas=3 Output shows three Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-0 1/1 Running 0 my-cluster-pool-b-kafka-1 1/1 Running 0 my-cluster-pool-b-kafka-2 1/1 Running 0 9.3.5. 
Moving nodes between node pools This procedure describes how to move nodes between source and target Kafka node pools without downtime. You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. When the replicas on the new node are in-sync, you can delete the old node. In this procedure, we start with two node pools: pool-a with three replicas is the target node pool pool-b with four replicas is the source node pool We scale up pool-a , and reassign partitions and scale down pool-b , which results in the following: pool-a with four replicas pool-b with three replicas Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. (Optional) For scale up and scale down operations, you can specify the range of node IDs to use . If you have assigned node IDs for the operation, the ID of the node being added or removed is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used when adding nodes; and the node with the highest available ID in the node pool is removed. Procedure Create a new node in the target node pool. For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas: oc scale kafkanodepool pool-a --replicas=4 Check the status of the deployment and wait for the pods in the node pool to be created and ready ( 1/1 ). oc get pods -n <my_cluster_operator_namespace> Output shows four Kafka nodes in the source and target node pools NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0 my-cluster-pool-a-7 1/1 Running 0 my-cluster-pool-b-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 my-cluster-pool-b-6 1/1 Running 0 Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-7 , which has a node ID of 7 . Reassign the partitions from the old node to the new node. Before scaling down the source node pool, use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: remove-brokers brokers: [6] We are reassigning partitions from node my-cluster-pool-b-6 . The reassignment can take some time depending on the number of topics and partitions in the cluster. After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool. For example, node pool pool-b has four replicas. We remove a node by decreasing the number of replicas: oc scale kafkanodepool pool-b --replicas=3 The node with the highest ID ( 6 ) within the pool is removed. Output shows three Kafka nodes in the source node pool NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-2 1/1 Running 0 my-cluster-pool-b-kafka-3 1/1 Running 0 my-cluster-pool-b-kafka-5 1/1 Running 0 9.3.6. Changing node pool roles Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for metadata management. If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. If you are using ZooKeeper, nodes must be set as brokers only. 
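As a quick reference, the roles property of the KafkaNodePool spec differs between the two modes. The following fragments are a sketch only, summarizing the examples shown elsewhere in this section:
Roles in KRaft mode (nodes can be controllers, brokers, or both)
spec:
  roles:
    - controller
    - broker
Roles in a ZooKeeper-based cluster (broker only)
spec:
  roles:
    - broker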
In certain circumstances you might want to change the roles assigned to a node pool. For example, you may have a node pool that contains nodes that perform dual broker and controller roles, and then decide to split the roles between two node pools. In this case, you create a new node pool with nodes that act only as brokers, and then reassign partitions from the dual-role nodes to the new brokers. You can then switch the old node pool to a controller-only role. You can also perform the reverse operation by moving from node pools with controller-only and broker-only roles to a node pool that contains nodes that perform dual broker and controller roles. In this case, you add the broker role to the existing controller-only node pool, reassign partitions from the broker-only nodes to the dual-role nodes, and then delete the broker-only node pool. When removing broker roles in the node pool configuration, keep in mind that Kafka does not automatically reassign partitions. Before removing the broker role, ensure that nodes changing to controller-only roles do not have any assigned partitions. If partitions are assigned, the change is prevented. No replicas must be left on the node before removing the broker role. The best way to reassign partitions before changing roles is to apply a Cruise Control optimization proposal in remove-brokers mode. For more information, see Section 19.6, "Generating optimization proposals" . 9.3.7. Transitioning to separate broker and controller roles This procedure describes how to transition to using node pools with separate roles. If your Kafka cluster is using a node pool with combined controller and broker roles, you can transition to using two node pools with separate roles. To do this, rebalance the cluster to move partition replicas to a node pool with a broker-only role, and then switch the old node pool to a controller-only role. In this procedure, we start with node pool pool-a , which has controller and broker roles: Dual-role node pool apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false # ... The node pool has three nodes: Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 Each node performs a combined role of broker and controller. We create a second node pool called pool-b , with three nodes that act as brokers only. Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. Procedure Create a node pool with a broker role. Example node pool configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... The new node pool also has three nodes. If you already have a broker-only node pool, you can skip this step. Apply the new KafkaNodePool resource to create the brokers. Check the status of the deployment and wait for the pods in the node pool to be created and ready ( 1/1 ). 
oc get pods -n <my_cluster_operator_namespace> Output shows pods running in two node pools NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 Node IDs are appended to the name of the node on creation. Use the Cruise Control remove-brokers mode to reassign partition replicas from the dual-role nodes to the newly added brokers. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: remove-brokers brokers: [0, 1, 2] The reassignment can take some time depending on the number of topics and partitions in the cluster. Note If nodes changing to controller-only roles have any assigned partitions, the change is prevented. The status.conditions of the Kafka resource provide details of events preventing the change. Remove the broker role from the node pool that originally had a combined role. Dual-role nodes switched to controllers apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false # ... Apply the configuration change so that the node pool switches to a controller-only role. 9.3.8. Transitioning to dual-role nodes This procedure describes how to transition from separate node pools with broker-only and controller-only roles to using a dual-role node pool. If your Kafka cluster is using node pools with dedicated controller and broker nodes, you can transition to using a single node pool with both roles. To do this, add the broker role to the controller-only node pool, rebalance the cluster to move partition replicas to the dual-role node pool, and then delete the old broker-only node pool. In this procedure, we start with two node pools pool-a , which has only the controller role and pool-b which has only the broker role: Single role node pools apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... --- apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... The Kafka cluster has six nodes: Kafka nodes in the node pools NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 The pool-a nodes perform the role of controller. The pool-b nodes perform the role of broker. Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. Procedure Edit the node pool pool-a and add the broker role to it. 
Example node pool configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... Check the status and wait for the pods in the node pool to be restarted and ready ( 1/1 ). oc get pods -n <my_cluster_operator_namespace> Output shows pods running in two node pools NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 Node IDs are appended to the name of the node on creation. Use the Cruise Control remove-brokers mode to reassign partition replicas from the broker-only nodes to the dual-role nodes. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: remove-brokers brokers: [3, 4, 5] The reassignment can take some time depending on the number of topics and partitions in the cluster. Remove the pool-b node pool that has the old broker-only nodes. oc delete kafkanodepool pool-b -n <my_cluster_operator_namespace> 9.3.9. Managing storage using node pools Storage management in Streams for Apache Kafka is usually straightforward, and requires little change when set up, but there might be situations where you need to modify your storage configurations. Node pools simplify this process, because you can set up separate node pools that specify your new storage requirements. In this procedure we create and manage storage for a node pool called pool-a containing three nodes. We show how to change the storage class ( volumes.class ) that defines the type of persistent storage it uses. You can use the same steps to change the storage size ( volumes.size ). Note We strongly recommend using block storage. Streams for Apache Kafka is only tested for use with block storage. Prerequisites The Cluster Operator must be deployed. Cruise Control is deployed with Kafka. For storage that uses persistent volume claims for dynamic volume allocation, storage classes are defined and available in the OpenShift cluster that correspond to the storage solutions you need. Procedure Create the node pool with its own storage settings. For example, node pool pool-a uses JBOD storage with persistent volumes: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: gp2-ebs # ... Nodes in pool-a are configured to use Amazon EBS (Elastic Block Store) GP2 volumes. Apply the node pool configuration for pool-a . Check the status of the deployment and wait for the pods in pool-a to be created and ready ( 1/1 ). oc get pods -n <my_cluster_operator_namespace> Output shows three Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 To migrate to a new storage class, create a new node pool with the required storage configuration: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: roles: - broker replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 1Ti class: gp3-ebs # ... 
Nodes in pool-b are configured to use Amazon EBS (Elastic Block Store) GP3 volumes. Apply the node pool configuration for pool-b . Check the status of the deployment and wait for the pods in pool-b to be created and ready. Reassign the partitions from pool-a to pool-b . When migrating to a new storage configuration, use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed. Using Cruise Control to reassign partition replicas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # ... spec: mode: remove-brokers brokers: [0, 1, 2] We are reassigning partitions from pool-a . The reassignment can take some time depending on the number of topics and partitions in the cluster. After the reassignment process is complete, delete the old node pool: oc delete kafkanodepool pool-a 9.3.10. Managing storage affinity using node pools In situations where storage resources, such as local persistent volumes, are constrained to specific worker nodes, or availability zones, configuring storage affinity helps to schedule pods to use the right nodes. Node pools allow you to configure affinity independently. In this procedure, we create and manage storage affinity for two availability zones: zone-1 and zone-2 . You can configure node pools for separate availability zones, but use the same storage class. We define an all-zones persistent storage class representing the storage resources available in each zone. We also use the .spec.template.pod properties to configure the node affinity and schedule Kafka pods on zone-1 and zone-2 worker nodes. The storage class and affinity is specified in node pools representing the nodes in each availability zone: pool-zone-1 pool-zone-2 . Prerequisites The Cluster Operator must be deployed. If you are not familiar with the concepts of affinity, see the Kubernetes node and pod affinity documentation . Procedure Define the storage class for use with each availability zone: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: all-zones provisioner: kubernetes.io/my-storage parameters: type: ssd volumeBindingMode: WaitForFirstConsumer Create node pools representing the two availability zones, specifying the all-zones storage class and the affinity for each zone: Node pool configuration for zone-1 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-1 labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-1 # ... Node pool configuration for zone-2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-2 labels: strimzi.io/cluster: my-cluster spec: replicas: 4 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-2 # ... Apply the node pool configuration. Check the status of the deployment and wait for the pods in the node pools to be created and ready ( 1/1 ). 
oc get pods -n <my_cluster_operator_namespace> Output shows 3 Kafka nodes in pool-zone-1 and 4 Kafka nodes in pool-zone-2 NAME READY STATUS RESTARTS my-cluster-pool-zone-1-kafka-0 1/1 Running 0 my-cluster-pool-zone-1-kafka-1 1/1 Running 0 my-cluster-pool-zone-1-kafka-2 1/1 Running 0 my-cluster-pool-zone-2-kafka-3 1/1 Running 0 my-cluster-pool-zone-2-kafka-4 1/1 Running 0 my-cluster-pool-zone-2-kafka-5 1/1 Running 0 my-cluster-pool-zone-2-kafka-6 1/1 Running 0 9.3.11. Migrating existing Kafka clusters to use Kafka node pools This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool. Note Currently, replica and storage configuration in the KafkaNodePool resource must also be present in the Kafka resource. The configuration is ignored when node pools are being used. Prerequisites The Cluster Operator must be deployed. Procedure Create a new KafkaNodePool resource. Name the resource kafka . Point a strimzi.io/cluster label to your existing Kafka resource. Set the replica count and storage configuration to match your current Kafka cluster. Set the roles to broker . Example configuration for a node pool used in migrating a Kafka cluster apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kafka labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false Warning To preserve cluster data and the names of its nodes and resources, the node pool name must be kafka , and the strimzi.io/cluster label must match the Kafka resource name. Otherwise, nodes and resources are created with new names, including the persistent volume storage used by the nodes. Consequently, your data may not be available. Apply the KafkaNodePool resource: oc apply -f <node_pool_configuration_file> By applying this resource, you switch Kafka to using node pools. There is no change or rolling update and resources are identical to how they were before. Enable support for node pools in the Kafka resource using the strimzi.io/node-pools: enabled annotation. Example configuration for a node pool in a cluster using ZooKeeper apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster annotations: strimzi.io/node-pools: enabled spec: kafka: # ... zookeeper: # ... Apply the Kafka resource: oc apply -f <kafka_configuration_file> There is no change or rolling update. The resources remain identical to how they were before. Remove the replicated properties from the Kafka custom resource. When the KafkaNodePool resource is in use, you can remove the properties that you copied to the KafkaNodePool resource, such as the .spec.kafka.replicas and .spec.kafka.storage properties. Reversing the migration To revert to managing Kafka nodes using only Kafka custom resources: If you have multiple node pools, consolidate them into a single KafkaNodePool named kafka with node IDs from 0 to N (where N is the number of replicas). Ensure that the .spec.kafka configuration in the Kafka resource matches the KafkaNodePool configuration, including storage, resources, and replicas. Disable support for node pools in the Kafka resource using the strimzi.io/node-pools: disabled annotation. Delete the Kafka node pool named kafka . 9.4. Configuring the Entity Operator Use the entityOperator property in Kafka.spec to configure the Entity Operator. 
The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. It comprises the following operators: Topic Operator to manage Kafka topics User Operator to manage Kafka users By configuring the Kafka resource, the Cluster Operator can deploy the Entity Operator, including one or both operators. Once deployed, the operators are automatically configured to handle the topics and users of the Kafka cluster. Each operator can only monitor a single namespace. For more information, see Section 1.2.1, "Watching Streams for Apache Kafka resources in OpenShift namespaces" . The entityOperator property supports several sub-properties: tlsSidecar topicOperator userOperator template The tlsSidecar property contains the configuration of the TLS sidecar container, which is used to communicate with ZooKeeper. The template property contains the configuration of the Entity Operator pod, such as labels, annotations, affinity, and tolerations. For more information on configuring templates, see Section 9.16, "Customizing OpenShift resources" . The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator. The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator. For more information on the properties used to configure the Entity Operator, see the EntityOperatorSpec schema reference . Example of basic configuration enabling both operators apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} If an empty object ( {} ) is used for the topicOperator and userOperator , all properties use their default values. When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed. 9.4.1. Configuring the Topic Operator Use topicOperator properties in Kafka.spec.entityOperator to configure the Topic Operator. Note If you are using unidirectional topic management, which is enabled by default, the following properties are not used and are ignored: Kafka.spec.entityOperator.topicOperator.zookeeperSessionTimeoutSeconds and Kafka.spec.entityOperator.topicOperator.topicMetadataMaxAttempts . For more information on unidirectional topic management, refer to Section 10.1, "Topic management modes" . The following properties are supported: watchedNamespace The OpenShift namespace in which the Topic Operator watches for KafkaTopic resources. Default is the namespace where the Kafka cluster is deployed. reconciliationIntervalSeconds The interval between periodic reconciliations in seconds. Default 120 . zookeeperSessionTimeoutSeconds The ZooKeeper session timeout in seconds. Default 18 . topicMetadataMaxAttempts The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation might take more time due to the number of partitions or replicas. Default 6 . image The image property can be used to configure the container image which is used. To learn more, refer to the information provided on configuring the image property` . resources The resources property configures the amount of resources allocated to the Topic Operator. You can specify requests and limits for memory and cpu resources. 
The requests should be enough to ensure a stable performance of the operator. logging The logging property configures the logging of the Topic Operator. To learn more, refer to the information provided on Topic Operator logging . Example Topic Operator configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi # ... 9.4.2. Configuring the User Operator Use userOperator properties in Kafka.spec.entityOperator to configure the User Operator. The following properties are supported: watchedNamespace The OpenShift namespace in which the User Operator watches for KafkaUser resources. Default is the namespace where the Kafka cluster is deployed. reconciliationIntervalSeconds The interval between periodic reconciliations in seconds. Default 120 . image The image property can be used to configure the container image which is used. To learn more, refer to the information provided on configuring the image property. resources The resources property configures the amount of resources allocated to the User Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator. logging The logging property configures the logging of the User Operator. To learn more, refer to the information provided on User Operator logging . secretPrefix The secretPrefix property adds a prefix to the name of all Secrets created from the KafkaUser resource. For example, secretPrefix: kafka- would prefix all Secret names with kafka- . So a KafkaUser named my-user would create a Secret named kafka-my-user . Example User Operator configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi # ... 9.5. Configuring the Cluster Operator Use environment variables to configure the Cluster Operator. Specify the environment variables for the container image of the Cluster Operator in its Deployment configuration file. You can use the following environment variables to configure the Cluster Operator. If you are running Cluster Operator replicas in standby mode, there are additional environment variables for enabling leader election . Kafka, Kafka Connect, and Kafka MirrorMaker support multiple versions. Use their STRIMZI_<COMPONENT_NAME>_IMAGES environment variables to configure the default container images used for each version. The configuration provides a mapping between a version and an image. The required syntax is whitespace or comma-separated <version> = <image> pairs, which determine the image to use for a given version. For example, 3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0. An illustrative Deployment entry is shown in the sketch after the note below. These default images are overridden if image property values are specified in the configuration of a component. For more information on image configuration of components, see the Streams for Apache Kafka Custom Resource API Reference . Note The Deployment configuration file provided with the Streams for Apache Kafka release artifacts is install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml .
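A minimal sketch of how such a version-to-image mapping might appear as an environment variable in the Cluster Operator Deployment is shown below. The image references are illustrative only; use the values shipped with your release of Streams for Apache Kafka.

Example version-to-image mapping in the Cluster Operator Deployment (illustrative values)

env:
  # Maps each supported Kafka version to the container image used for that version;
  # entries are whitespace-separated <version>=<image> pairs
  - name: STRIMZI_KAFKA_IMAGES
    value: |
      3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0
      3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0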
STRIMZI_NAMESPACE A comma-separated list of namespaces that the operator operates in. When not set, set to empty string, or set to * , the Cluster Operator operates in all namespaces. The Cluster Operator deployment might use the downward API to set this automatically to the namespace the Cluster Operator is deployed in. Example configuration for Cluster Operator namespaces env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_FULL_RECONCILIATION_INTERVAL_MS Optional, default is 120000 ms. The interval between periodic reconciliations , in milliseconds. STRIMZI_OPERATION_TIMEOUT_MS Optional, default 300000 ms. The timeout for internal operations, in milliseconds. Increase this value when using Streams for Apache Kafka on clusters where regular OpenShift operations take longer than usual (due to factors such as prolonged download times for container images, for example). STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS Optional, default 10000 ms. The session timeout for the Cluster Operator's ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues. There is a maximum allowed session time set on the ZooKeeper server side via the maxSessionTimeout config. By default, the maximum session timeout value is 20 times the default tickTime (whose default is 2000) at 40000 ms. If you require a higher timeout, change the maxSessionTimeout ZooKeeper server configuration value. STRIMZI_OPERATIONS_THREAD_POOL_SIZE Optional, default 10. The worker thread pool size, which is used for various asynchronous and blocking operations that are run by the Cluster Operator. STRIMZI_OPERATOR_NAME Optional, defaults to the pod's hostname. The operator name identifies the Streams for Apache Kafka instance when emitting OpenShift events . STRIMZI_OPERATOR_NAMESPACE The name of the namespace where the Cluster Operator is running. Do not configure this variable manually. Use the downward API. env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_OPERATOR_NAMESPACE_LABELS Optional. The labels of the namespace where the Streams for Apache Kafka Cluster Operator is running. Use namespace labels to configure the namespace selector in network policies . Network policies allow the Streams for Apache Kafka Cluster Operator access only to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the Cluster Operator from any namespace in the OpenShift cluster. env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 STRIMZI_LABELS_EXCLUSION_PATTERN Optional, default regex pattern is ^app.kubernetes.io/(?!part-of).* . The regex exclusion pattern used to filter labels propagation from the main custom resource to its subresources. The labels exclusion filter is not applied to labels in template sections such as spec.kafka.template.pod.metadata.labels . env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: "^key1.*" STRIMZI_CUSTOM_<COMPONENT_NAME>_LABELS Optional. One or more custom labels to apply to all the pods created by the custom resource of the component. The Cluster Operator labels the pods when the custom resource is created or is reconciled. 
Labels can be applied to the following components: KAFKA KAFKA_CONNECT KAFKA_CONNECT_BUILD ZOOKEEPER ENTITY_OPERATOR KAFKA_MIRROR_MAKER2 KAFKA_MIRROR_MAKER CRUISE_CONTROL KAFKA_BRIDGE KAFKA_EXPORTER STRIMZI_CUSTOM_RESOURCE_SELECTOR Optional. The label selector to filter the custom resources handled by the Cluster Operator. The operator will operate only on those custom resources that have the specified labels set. Resources without these labels will not be seen by the operator. The label selector applies to Kafka , KafkaConnect , KafkaBridge , KafkaMirrorMaker , and KafkaMirrorMaker2 resources. KafkaRebalance and KafkaConnector resources are operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels. env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2 STRIMZI_KAFKA_IMAGES Required. The mapping from the Kafka version to the corresponding image containing a Kafka broker for that version. For example 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0, 3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . STRIMZI_KAFKA_CONNECT_IMAGES Required. The mapping from the Kafka version to the corresponding image of Kafka Connect for that version. For example 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0, 3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . STRIMZI_KAFKA_MIRROR_MAKER2_IMAGES Required. The mapping from the Kafka version to the corresponding image of MirrorMaker 2 for that version. For example 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0, 3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . (Deprecated) STRIMZI_KAFKA_MIRROR_MAKER_IMAGES Required. The mapping from the Kafka version to the corresponding image of MirrorMaker for that version. For example 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0, 3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE Optional. The default is registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 . The image name to use as the default when deploying the Topic Operator if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in the Kafka resource. STRIMZI_DEFAULT_USER_OPERATOR_IMAGE Optional. The default is registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 . The image name to use as the default when deploying the User Operator if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Kafka resource. STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE Optional. The default is registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . The image name to use as the default when deploying the sidecar container for the Entity Operator if no image is specified as the Kafka.spec.entityOperator.tlsSidecar.image in the Kafka resource. The sidecar provides TLS support. STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE Optional. The default is registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . The image name to use as the default when deploying the Kafka Exporter if no image is specified as the Kafka.spec.kafkaExporter.image in the Kafka resource. STRIMZI_DEFAULT_CRUISE_CONTROL_IMAGE Optional. The default is registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 . The image name to use as the default when deploying Cruise Control if no image is specified as the Kafka.spec.cruiseControl.image in the Kafka resource. STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE Optional. The default is registry.redhat.io/amq-streams/bridge-rhel9:2.7.0 . 
The image name to use as the default when deploying the Kafka Bridge if no image is specified as the Kafka.spec.kafkaBridge.image in the Kafka resource. STRIMZI_DEFAULT_KAFKA_INIT_IMAGE Optional. The default is registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 . The image name to use as the default for the Kafka initializer container if no image is specified in the brokerRackInitImage of the Kafka resource or the clientRackInitImage of the Kafka Connect resource. The init container is started before the Kafka cluster for initial configuration work, such as rack support. STRIMZI_IMAGE_PULL_POLICY Optional. The ImagePullPolicy that is applied to containers in all pods managed by the Cluster Operator. The valid values are Always , IfNotPresent , and Never . If not specified, the OpenShift defaults are used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_IMAGE_PULL_SECRETS Optional. A comma-separated list of Secret names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are specified in the imagePullSecrets property for all pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_KUBERNETES_VERSION Optional. Overrides the OpenShift version information detected from the API server. Example configuration for OpenShift version override env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64 KUBERNETES_SERVICE_DNS_DOMAIN Optional. Overrides the default OpenShift DNS domain name suffix. By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local . For example, for broker kafka-0 : <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local The DNS domain name is added to the Kafka broker certificates used for hostname verification. If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers. STRIMZI_CONNECT_BUILD_TIMEOUT_MS Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectors, in milliseconds. Consider increasing this value when using Streams for Apache Kafka to build container images containing many connectors or using a slow container registry. STRIMZI_NETWORK_POLICY_GENERATION Optional, default true . Network policy for resources. Network policies allow connections between Kafka components. Set this environment variable to false to disable network policy generation. You might do this, for example, if you want to use custom network policies. Custom network policies allow more control over maintaining the connections between components. STRIMZI_DNS_CACHE_TTL Optional, default 30 . Number of seconds to cache successful name lookups in local DNS resolver. Any negative value means cache forever. Zero means do not cache, which can be useful for avoiding connection errors due to long caching policies being applied. STRIMZI_POD_SET_RECONCILIATION_ONLY Optional, default false . 
When set to true , the Cluster Operator reconciles only the StrimziPodSet resources and any changes to the other custom resources ( Kafka , KafkaConnect , and so on) are ignored. This mode is useful for ensuring that your pods are recreated if needed, but no other changes happen to the clusters. STRIMZI_FEATURE_GATES Optional. Enables or disables the features and functionality controlled by feature gates . STRIMZI_POD_SECURITY_PROVIDER_CLASS Optional. Configuration for the pluggable PodSecurityProvider class, which can be used to provide the security context configuration for Pods and containers. 9.5.1. Restricting access to the Cluster Operator using network policy Use the STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels. The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required and allowed by Streams for Apache Kafka. If the Cluster Operator is running in a separate namespace to the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. By adding namespace labels, access to the Cluster Operator is restricted to the namespaces specified. Network policy configured for the Cluster Operator deployment #... env: # ... - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #... 9.5.2. Setting periodic reconciliation of custom resources Use the STRIMZI_FULL_RECONCILIATION_INTERVAL_MS variable to set the time interval for periodic reconciliations by the Cluster Operator. Replace its value with the required interval in milliseconds. Reconciliation period configured for the Cluster Operator deployment #... env: # ... - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: "120000" #... The Cluster Operator reacts to all notifications about applicable cluster resources received from the OpenShift cluster. If the operator is not running, or if a notification is not received for any reason, resources will get out of sync with the state of the running OpenShift cluster. In order to handle failovers properly, a periodic reconciliation process is executed by the Cluster Operator so that it can compare the state of the resources with the current cluster deployments in order to have a consistent state across all of them. Additional resources Downward API 9.5.3. Pausing reconciliation of custom resources using annotations Sometimes it is useful to pause the reconciliation of custom resources managed by Streams for Apache Kafka operators, so that you can perform fixes or make updates. If reconciliations are paused, any changes made to custom resources are ignored by the operators until the pause ends. If you want to pause reconciliation of a custom resource, set the strimzi.io/pause-reconciliation annotation to true in its configuration. This instructs the appropriate operator to pause reconciliation of the custom resource. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused. You can also create a custom resource with the pause annotation enabled. The custom resource is created, but it is ignored. 
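For example, a KafkaConnect resource created with the pause annotation already set might look like the following minimal sketch. The resource name and connection details are illustrative; because the annotation is present at creation time, the resource is created but is not reconciled until the annotation is set to false or removed.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    # Reconciliation is paused from the moment the resource is created
    strimzi.io/pause-reconciliation: "true"
spec:
  replicas: 3
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # ...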
Prerequisites The Streams for Apache Kafka Operator that manages the custom resource is running. Procedure Annotate the custom resource in OpenShift, setting pause-reconciliation to true : oc annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation="true" For example, for the KafkaConnect custom resource: oc annotate KafkaConnect my-connect strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused : oc describe <kind_of_custom_resource> <name_of_custom_resource> The type condition changes to ReconciliationPaused at the lastTransitionTime . Example custom resource with a paused reconciliation condition type apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: "true" strimzi.io/use-connector-resources: "true" creationTimestamp: 2021-03-12T10:47:11Z #... spec: # ... status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: "True" type: ReconciliationPaused Resuming from pause To resume reconciliation, you can set the annotation to false , or remove the annotation. Additional resources Finding the status of a custom resource 9.5.4. Running multiple Cluster Operator replicas with leader election The default Cluster Operator configuration enables leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. When the leader stops or fails, one of the standby replicas is elected as the new leader and starts operating the deployed resources. By default, Streams for Apache Kafka runs with a single Cluster Operator replica that is always the leader replica. When a single Cluster Operator replica stops or fails, OpenShift starts a new replica. Running the Cluster Operator with multiple replicas is not essential. But it's useful to have replicas on standby in case of large-scale disruptions caused by major failure. For example, suppose multiple worker nodes or an entire availability zone fails. This failure might cause the Cluster Operator pod and many Kafka pods to go down at the same time. If subsequent pod scheduling causes congestion through lack of resources, this can delay operations when running a single Cluster Operator. 9.5.4.1. Enabling leader election for Cluster Operator replicas Configure leader election environment variables when running additional Cluster Operator replicas. The following environment variables are supported: STRIMZI_LEADER_ELECTION_ENABLED Optional, disabled ( false ) by default. Enables or disables leader election, which allows additional Cluster Operator replicas to run on standby. Note Leader election is disabled by default. It is only enabled when applying this environment variable on installation. STRIMZI_LEADER_ELECTION_LEASE_NAME Required when leader election is enabled. The name of the OpenShift Lease resource that is used for the leader election. STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE Required when leader election is enabled. The namespace where the OpenShift Lease resource used for leader election is created. You can use the downward API to configure it to the namespace where the Cluster Operator is deployed. env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_LEADER_ELECTION_IDENTITY Required when leader election is enabled. 
Configures the identity of a given Cluster Operator instance used during the leader election. The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed. env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS Optional, default 15000 ms. Specifies the duration the acquired lease is valid. STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS Optional, default 10000 ms. Specifies the period the leader should try to maintain leadership. STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS Optional, default 2000 ms. Specifies the frequency of updates to the lease lock by the leader. 9.5.4.2. Configuring Cluster Operator replicas To run additional Cluster Operator replicas in standby mode, you will need to increase the number of replicas and enable leader election. To configure leader election, use the leader election environment variables. To make the required changes, configure the following Cluster Operator installation files located in install/cluster-operator/ : 060-Deployment-strimzi-cluster-operator.yaml 022-ClusterRole-strimzi-cluster-operator-role.yaml 022-RoleBinding-strimzi-cluster-operator.yaml Leader election has its own ClusterRole and RoleBinding RBAC resources that target the namespace where the Cluster Operator is running, rather than the namespace it is watching. The default deployment configuration creates a Lease resource called strimzi-cluster-operator in the same namespace as the Cluster Operator. The Cluster Operator uses leases to manage leader election. The RBAC resources provide the permissions to use the Lease resource. If you use a different Lease name or namespace, update the ClusterRole and RoleBinding files accordingly. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the Deployment resource that is used to deploy the Cluster Operator, which is defined in the 060-Deployment-strimzi-cluster-operator.yaml file. Change the replicas property from the default (1) to a value that matches the required number of replicas. Increasing the number of Cluster Operator replicas apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3 Check that the leader election env properties are set. If they are not set, configure them. To enable leader election, STRIMZI_LEADER_ELECTION_ENABLED must be set to true (default). In this example, the name of the lease is changed to my-strimzi-cluster-operator . Configuring leader election environment variables for the Cluster Operator # ... spec: containers: - name: strimzi-cluster-operator # ... env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: "true" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: "my-strimzi-cluster-operator" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name For a description of the available environment variables, see Section 9.5.4.1, "Enabling leader election for Cluster Operator replicas" . If you specified a different name or namespace for the Lease resource used in leader election, update the RBAC resources. (optional) Edit the ClusterRole resource in the 022-ClusterRole-strimzi-cluster-operator-role.yaml file.
Update resourceNames with the name of the Lease resource. Updating the ClusterRole references to the lease apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator # ... (optional) Edit the RoleBinding resource in the 022-RoleBinding-strimzi-cluster-operator.yaml file. Update subjects.name and subjects.namespace with the name of the Lease resource and the namespace where it was created. Updating the RoleBinding references to the lease apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject # ... Deploy the Cluster Operator: oc create -f install/cluster-operator -n myproject Check the status of the deployment: oc get deployments -n myproject Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows the correct number of replicas. 9.5.5. Configuring Cluster Operator HTTP proxy settings If you are running a Kafka cluster behind an HTTP proxy, you can still pass data in and out of the cluster. For example, you can run Kafka Connect with connectors that push and pull data from outside the proxy. Or you can use a proxy to connect with an authorization server. Configure the Cluster Operator deployment to specify the proxy environment variables. The Cluster Operator accepts standard proxy configuration ( HTTP_PROXY , HTTPS_PROXY and NO_PROXY ) as environment variables. The proxy settings are applied to all Streams for Apache Kafka containers. The format for a proxy address is http://<ip_address>:<port_number>. To set up a proxy with a username and password, the format is http://<username>:<password>@<ip_address>:<port_number>. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure To add proxy environment variables to the Cluster Operator, update its Deployment configuration ( install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml ). Example proxy configuration for the Cluster Operator apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: # ... env: # ... - name: "HTTP_PROXY" value: "http://proxy.com" 1 - name: "HTTPS_PROXY" value: "https://proxy.com" 2 - name: "NO_PROXY" value: "internal.com, other.domain.com" 3 # ... 1 Address of the proxy server. 2 Secure address of the proxy server. 3 Addresses for servers that are accessed directly as exceptions to the proxy server. The URLs are comma-separated. Alternatively, edit the Deployment directly: oc edit deployment strimzi-cluster-operator If you updated the YAML file instead of editing the Deployment directly, apply the changes: oc create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml Additional resources Host aliases Designating Streams for Apache Kafka administrators 9.5.6. Disabling FIPS mode using Cluster Operator configuration Streams for Apache Kafka automatically switches to FIPS mode when running on a FIPS-enabled OpenShift cluster.
Disable FIPS mode by setting the FIPS_MODE environment variable to disabled in the deployment configuration for the Cluster Operator. With FIPS mode disabled, Streams for Apache Kafka automatically disables FIPS in the OpenJDK for all components. With FIPS mode disabled, Streams for Apache Kafka is not FIPS compliant. The Streams for Apache Kafka operators, as well as all operands, run in the same way as if they were running on an OpenShift cluster without FIPS enabled. Procedure To disable the FIPS mode in the Cluster Operator, update its Deployment configuration ( install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml ) and add the FIPS_MODE environment variable. Example FIPS configuration for the Cluster Operator apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: # ... env: # ... - name: "FIPS_MODE" value: "disabled" 1 # ... 1 Disables the FIPS mode. Alternatively, edit the Deployment directly: oc edit deployment strimzi-cluster-operator If you updated the YAML file instead of editing the Deployment directly, apply the changes: oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml 9.6. Configuring Kafka Connect Update the spec properties of the KafkaConnect custom resource to configure your Kafka Connect deployment. Use Kafka Connect to set up external data connections to your Kafka cluster. For a deeper understanding of the Kafka Connect cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . KafkaConnector configuration KafkaConnector resources allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way. In your Kafka Connect configuration, you enable KafkaConnectors for a Kafka Connect cluster by adding the strimzi.io/use-connector-resources annotation. You can also add a build configuration so that Streams for Apache Kafka automatically builds a container image with the connector plugins you require for your data connections. External configuration for Kafka Connect connectors is specified through the externalConfiguration property. To manage connectors, you can use KafkaConnector custom resources or the Kafka Connect REST API. KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. For more information on using these methods to create, reconfigure, or delete connectors, see Adding connectors . Connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself. ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data. You can use ConfigMaps and Secrets to configure certain elements of a connector. You can then reference the configuration values in HTTP REST commands, which keeps the configuration separate and more secure, if needed. This method applies especially to confidential data, such as usernames, passwords, or certificates. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages .
Example KafkaConnect custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 21 1 Use KafkaConnect . 2 Enables KafkaConnectors for the Kafka Connect cluster. 3 The number of replica nodes for the workers that run tasks. 4 Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, Kafka Connect connects to Kafka brokers using a plain text connection. 5 Bootstrap server for connection to the Kafka cluster. 6 TLS encryption with key names under which TLS certificates are stored in X.509 format for the cluster. If certificates are stored in the same secret, it can be listed multiple times. 7 Kafka Connect configuration of workers (not connectors). Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. 8 Build configuration properties for building a container image with connector plugins automatically. 9 (Required) Configuration of the container registry where new images are pushed. 
10 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact . 11 External configuration for connectors using environment variables, as shown here, or volumes. You can also use configuration provider plugins to load configuration values from external sources. 12 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 13 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 14 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 15 Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 16 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka Connect. 17 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 18 SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs . The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration. 19 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 20 Environment variables are set for distributed tracing. 21 Distributed tracing is enabled by using OpenTelemetry. 9.6.1. Configuring Kafka Connect for multiple instances By default, Streams for Apache Kafka configures the group ID and names of the internal topics used by Kafka Connect. When running multiple instances of Kafka Connect, you must change these default settings using the following config properties: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # ... # ... 1 The Kafka Connect cluster group ID within Kafka. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all instances with the same group.id . Unless you modify these default settings, each instance connecting to the same Kafka cluster is deployed with the same values. In practice, this means all instances form a cluster and use the same internal topics. 
Multiple instances attempting to use the same internal topics will cause unexpected errors, so you must change the values of these properties for each instance. 9.6.2. Configuring Kafka Connect user authorization When using authorization in Kafka, a Kafka Connect user requires read/write access to the cluster group and internal topics of Kafka Connect. This procedure outlines how access is granted using simple authorization and ACLs. Properties for the Kafka Connect cluster group ID and internal topics are configured by Streams for Apache Kafka by default. Alternatively, you can define them explicitly in the spec of the KafkaConnect resource. This is useful when configuring Kafka Connect for multiple instances , as the values for the group ID and topics must differ when running multiple Kafka Connect instances. Simple authorization uses ACL rules managed by the Kafka AclAuthorizer and StandardAuthorizer plugins to ensure appropriate access levels. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference . Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the authorization property in the KafkaUser resource to provide access rights to the user. Access rights are configured for the Kafka Connect topics and cluster group using literal name values. The following table shows the default names configured for the topics and cluster group ID. Table 9.2. Names for the access rights configuration Property Name offset.storage.topic connect-cluster-offsets status.storage.topic connect-cluster-status config.storage.topic connect-cluster-configs group connect-cluster In this example configuration, the default names are used to specify access rights. If you are using different names for a Kafka Connect instance, use those names in the ACLs configuration. Example configuration for simple authorization apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: "*" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: "*" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: "*" # cluster group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: "*" Create or update the resource. oc apply -f KAFKA-USER-CONFIG-FILE 9.6.3. Manually stopping or pausing Kafka Connect connectors If you are using KafkaConnector resources to configure connectors, use the state configuration to either stop or pause a connector. In contrast to the paused state, where the connector and tasks remain instantiated, stopping a connector retains only the configuration, with no active processes. Stopping a connector from running may be more suitable for longer durations than just pausing. While a paused connector is quicker to resume, a stopped connector has the advantages of freeing up memory and resources. Note The state configuration replaces the (deprecated) pause configuration in the KafkaConnectorSpec schema, which allows pauses on connectors. 
If you were previously using the pause configuration to pause connectors, we encourage you to transition to using the state configuration only to avoid conflicts. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the connector you want to pause or stop: oc get KafkaConnector Edit the KafkaConnector resource to stop or pause the connector. Example configuration for stopping a Kafka Connect connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: file: "/opt/kafka/LICENSE" topic: my-topic state: stopped # ... Change the state configuration to stopped or paused . The default state for the connector when this property is not set is running . Apply the changes to the KafkaConnector configuration. You can resume the connector by changing state to running or removing the configuration. Note Alternatively, you can expose the Kafka Connect API and use the stop and pause endpoints to stop a connector from running. For example, PUT /connectors/<connector_name>/stop . You can then use the resume endpoint to restart it. 9.6.4. Manually restarting Kafka Connect connectors If you are using KafkaConnector resources to manage connectors, use the strimzi.io/restart annotation to manually trigger a restart of a connector. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart: oc get KafkaConnector Restart the connector by annotating the KafkaConnector resource in OpenShift: oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart="true" The restart annotation is set to true . Wait for the reconciliation to occur (every two minutes by default). The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 9.6.5. Manually restarting Kafka Connect connector tasks If you are using KafkaConnector resources to manage connectors, use the strimzi.io/restart-task annotation to manually trigger a restart of a connector task. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart: oc get KafkaConnector Find the ID of the task to be restarted from the KafkaConnector custom resource: oc describe KafkaConnector <kafka_connector_name> Task IDs are non-negative integers, starting from 0. Use the ID to restart the connector task by annotating the KafkaConnector resource in OpenShift: oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task="0" In this example, task 0 is restarted. Wait for the reconciliation to occur (every two minutes by default). The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 9.7. Configuring Kafka MirrorMaker 2 Update the spec properties of the KafkaMirrorMaker2 custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output. 
MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. You configure MirrorMaker 2 to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2 connectors to make the connection. MirrorMaker 2 supports topic configuration synchronization between the source and target clusters. You specify source topics in the MirrorMaker 2 configuration. MirrorMaker 2 monitors the source topics, and detects and propagates changes to the source topics to the remote topics. Changes might include automatically creating missing topics and partitions. Note In most cases you write to local topics and read from remote topics. Though write operations are not prevented on remote topics, they should be avoided. The configuration must specify: Each Kafka cluster Connection information for each cluster, including authentication The replication flow and direction Cluster to cluster Topic to topic For a deeper understanding of the Kafka MirrorMaker 2 cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . Note MirrorMaker 2 resource configuration differs from the previous version of MirrorMaker, which is now deprecated. There is currently no legacy support, so any resources must be manually converted into the new format. Default configuration MirrorMaker 2 provides default configuration values for properties such as replication factors. A minimal configuration, with defaults left unchanged, might look like the following example: Minimal configuration for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-source" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: "my-cluster-target" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: {} You can configure access control for source and target clusters using mTLS or SASL authentication. The example configuration that follows uses TLS encryption and mTLS authentication for the source and target cluster. You can specify the topics and consumer groups you wish to replicate from a source cluster in the KafkaMirrorMaker2 resource. You use the topicsPattern and groupsPattern properties to do this. You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set the topicsPattern and groupsPattern properties. You can also replicate all topics and consumer groups by using ".*" as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages .
Example KafkaMirrorMaker2 custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 1 replicas: 3 2 connectCluster: "my-cluster-target" 3 clusters: 4 - alias: "my-cluster-source" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: "my-cluster-target" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: "my-cluster-source" 15 targetCluster: "my-cluster-target" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config: replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: "false" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" topicsPattern: "topic1|topic2|topic3" 33 groupsPattern: "group1|group2|group3" 34 resources: 35 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey 1 The Kafka Connect and MirrorMaker 2 version, which will always be the same. 2 The number of replica nodes for the workers that run tasks. 3 Kafka cluster alias for Kafka Connect, which must specify the target Kafka cluster. The Kafka cluster is used by Kafka Connect for its internal topics. 4 Specification for the Kafka clusters being synchronized. 5 Cluster alias for the source Kafka cluster. 6 Authentication for the source cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
7 Bootstrap server for connection to the source Kafka cluster. 8 TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times. 9 Cluster alias for the target Kafka cluster. 10 Authentication for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 11 Bootstrap server for connection to the target Kafka cluster. 12 Kafka Connect configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. 13 TLS encryption for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 14 MirrorMaker 2 connectors. 15 Cluster alias for the source cluster used by the MirrorMaker 2 connectors. 16 Cluster alias for the target cluster used by the MirrorMaker 2 connectors. 17 Configuration for the MirrorSourceConnector that creates remote topics. The config overrides the default configuration options. 18 The maximum number of tasks that the connector may create. Tasks handle the data replication and run in parallel. If the infrastructure supports the processing overhead, increasing this value can improve throughput. Kafka Connect distributes the tasks between members of the cluster. If there are more tasks than workers, workers are assigned multiple tasks. For sink connectors, aim to have one task for each topic partition consumed. For source connectors, the number of tasks that can run in parallel may also depend on the external system. The connector creates fewer than the maximum number of tasks if it cannot achieve the parallelism. 19 Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property. 20 Replication factor for mirrored topics created at the target cluster. 21 Replication factor for the MirrorSourceConnector offset-syncs internal topic that maps the offsets of the source and target clusters. 22 When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. The default is true . This feature is not compatible with the User Operator. If you are using the User Operator, set this property to false . 23 Optional setting to change the frequency of checks for new topics. The default is for a check every 10 minutes. 24 Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The property must be specified for all connectors. For bidirectional (active/active) replication, use the DefaultReplicationPolicy class to automatically rename remote topics and specify the replication.policy.separator property for all connectors to add a custom separator. 25 Configuration for the MirrorHeartbeatConnector that performs connectivity checks. The config overrides the default configuration options. 26 Replication factor for the heartbeat topic created at the target cluster. 27 Configuration for the MirrorCheckpointConnector that tracks offsets. The config overrides the default configuration options. 28 Replication factor for the checkpoints topic created at the target cluster. 29 Optional setting to change the frequency of checks for new consumer groups. 
The default is for a check every 10 minutes. 30 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default. 31 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization. 32 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks. 33 Topic replication from the source cluster defined as a comma-separated list or regular expression pattern. The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. Here we request three topics by name. 34 Consumer group replication from the source cluster defined as a comma-separated list or regular expression pattern. The checkpoint connector replicates the specified consumer groups. Here we request three consumer groups by name. 35 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 36 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 37 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 38 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 39 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 40 SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs . The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration. 41 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 42 Environment variables are set for distributed tracing. 43 Distributed tracing is enabled by using OpenTelemetry. 44 External configuration for an OpenShift Secret mounted to Kafka MirrorMaker as an environment variable. You can also use configuration provider plugins to load configuration values from external sources. 9.7.1. Configuring active/active or active/passive modes You can use MirrorMaker 2 in active/passive or active/active cluster configurations. active/active cluster configuration An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster. 
active/passive cluster configuration An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination. 9.7.1.1. Bidirectional replication (active/active) The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 9.1. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 9.7.1.2. Unidirectional replication (active/passive) The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. 9.7.2. Configuring MirrorMaker 2 for multiple instances By default, Streams for Apache Kafka configures the group ID and names of the internal topics used by the Kafka Connect framework that MirrorMaker 2 runs on. When running multiple instances of MirrorMaker 2, and they share the same connectCluster value, you must change these default settings using the following config properties: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-target" config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # ... # ... 1 The Kafka Connect cluster group ID within Kafka. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all instances with the same group.id . The connectCluster setting specifies the alias of the target Kafka cluster used by Kafka Connect for its internal topics. As a result, modifications to the connectCluster , group ID, and internal topic naming configuration are specific to the target Kafka cluster. You don't need to make changes if two MirrorMaker 2 instances are using the same source Kafka cluster or in an active-active mode where each MirrorMaker 2 instance has a different connectCluster setting and target cluster. However, if multiple MirrorMaker 2 instances share the same connectCluster , each instance connecting to the same target Kafka cluster is deployed with the same values. 
In practice, this means all instances form a cluster and use the same internal topics. Multiple instances attempting to use the same internal topics will cause unexpected errors, so you must change the values of these properties for each instance. 9.7.3. Configuring MirrorMaker 2 connectors Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors: MirrorSourceConnector The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run. MirrorCheckpointConnector The checkpoint connector periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the source and target cluster. MirrorHeartbeatConnector The heartbeat connector periodically checks connectivity between the source and target cluster. The following table describes connector properties and the connectors you configure to use them. Table 9.3. MirrorMaker 2 connector configuration properties Property sourceConnector checkpointConnector heartbeatConnector admin.timeout.ms Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). [✓] [✓] [✓] replication.policy.class Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy . [✓] [✓] [✓] replication.policy.separator The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property as topics retain their original names. [✓] [✓] [✓] consumer.poll.timeout.ms Timeout when polling the source cluster. Default is 1000 (1 second). [✓] [✓] offset-syncs.topic.location The location of the offset-syncs topic, which can be the source (default) or target cluster. [✓] [✓] topic.filter.class Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter . [✓] [✓] config.property.filter.class Topic filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter . [✓] config.properties.exclude Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. [✓] offset.lag.max Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100 . [✓] offset-syncs.topic.replication.factor Replication factor for the internal offset-syncs topic. Default is 3 . [✓] refresh.topics.enabled Enables check for new topics and partitions. Default is true . [✓] refresh.topics.interval.seconds Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. [✓] replication.factor The replication factor for new topics. Default is 2 . [✓] sync.topic.acls.enabled Enables synchronization of ACLs from the source cluster. Default is true . For more information, see Section 9.7.6, "Synchronizing ACL rules for remote topics" . [✓] sync.topic.acls.interval.seconds Frequency of ACL synchronization. Default is 600 (10 minutes). 
[✓] sync.topic.configs.enabled Enables synchronization of topic configuration from the source cluster. Default is true . [✓] sync.topic.configs.interval.seconds Frequency of topic configuration synchronization. Default 600 (10 minutes). [✓] checkpoints.topic.replication.factor Replication factor for the internal checkpoints topic. Default is 3 . [✓] emit.checkpoints.enabled Enables synchronization of consumer offsets to the target cluster. Default is true . [✓] emit.checkpoints.interval.seconds Frequency of consumer offset synchronization. Default is 60 (1 minute). [✓] group.filter.class Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter . [✓] refresh.groups.enabled Enables check for new consumer groups. Default is true . [✓] refresh.groups.interval.seconds Frequency of consumer group refresh. Default is 600 (10 minutes). [✓] sync.group.offsets.enabled Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false . [✓] sync.group.offsets.interval.seconds Frequency of consumer group offset synchronization. Default is 60 (1 minute). [✓] emit.heartbeats.enabled Enables connectivity checks on the target cluster. Default is true . [✓] emit.heartbeats.interval.seconds Frequency of connectivity checks. Default is 1 (1 second). [✓] heartbeats.topic.replication.factor Replication factor for the internal heartbeats topic. Default is 3 . [✓] 9.7.3.1. Changing the location of the consumer group offsets topic MirrorMaker 2 tracks offsets for consumer groups using internal topics. offset-syncs topic The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata. checkpoints topic The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster. 9.7.3.2. Synchronizing consumer group offsets The __consumer_offsets topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster. Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration, and setting the property to true . Synchronization is disabled by default. 
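For reference, the following minimal sketch shows how you might enable the synchronization in the checkpointConnector configuration of a KafkaMirrorMaker2 resource. The resource name and cluster aliases are illustrative placeholders that match the examples used elsewhere in this chapter, and all other connector configuration is omitted. Example checkpoint connector configuration enabling consumer group offset synchronization
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      # ...
      checkpointConnector:
        config:
          # Enables periodic synchronization of consumer group offsets
          # to the target cluster (disabled by default)
          sync.group.offsets.enabled: "true"
  # ...
Further checkpoint connector properties that control the synchronization frequency are described below.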
When using the IdentityReplicationPolicy in the source connector, it must also be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets are applied to the correct topics. Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are active in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned. If enabled, the synchronization of offsets from the source cluster is performed periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property. By default, this check is performed every 10 minutes. Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages. Note If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic. 9.7.3.3. Deciding when to use the heartbeat connector The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat topic is located on the target cluster, which allows it to do the following: Identify all source clusters it is mirroring data from Verify the liveness and latency of the mirroring process This helps to make sure that the process is not stuck or stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration. 9.7.3.4. Aligning the configuration of MirrorMaker 2 connectors To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors: replication.policy.class replication.policy.separator offset-syncs.topic.location topic.filter.class For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings. 9.7.4. Configuring MirrorMaker 2 connector producers and consumers MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings.
For example, you can increase the batch.size for the source producer that sends topics to the target Kafka cluster to better accommodate large volumes of messages. Important Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change. The following tables describe the producers and consumers for each of the connectors and where you can add configuration. Table 9.4. Source connector producers and consumers Type Description Configuration Producer Sends topic messages to the target Kafka cluster. Consider tuning the configuration of this producer when it is handling large volumes of data. mirrors.sourceConnector.config: producer.override.* Producer Writes to the offset-syncs topic, which maps the source and target offsets for replicated topic partitions. mirrors.sourceConnector.config: producer.* Consumer Retrieves topic messages from the source Kafka cluster. mirrors.sourceConnector.config: consumer.* Table 9.5. Checkpoint connector producers and consumers Type Description Configuration Producer Emits consumer offset checkpoints. mirrors.checkpointConnector.config: producer.override.* Consumer Loads the offset-syncs topic. mirrors.checkpointConnector.config: consumer.* Note You can set offset-syncs.topic.location to target to use the target Kafka cluster as the location of the offset-syncs topic. Table 9.6. Heartbeat connector producer Type Description Configuration Producer Emits heartbeats. mirrors.heartbeatConnector.config: producer.override.* The following example shows how you configure the producers and consumers. Example configuration for connector producers and consumers apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # ... checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # ... heartbeatConnector: config: producer.override.request.timeout.ms: 30000 # ... 9.7.5. Specifying a maximum number of data replication tasks Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups. Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks. You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasksMax property. Without specifying a maximum number of tasks, the default setting is a single task. The heartbeat connector always uses a single task. The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasksMax . For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. 
For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups. Increasing the number of tasks for the source connector is useful when you have a large number of partitions. Increasing the number of tasks for the source connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 10 # ... Increasing the number of tasks for the checkpoint connector is useful when you have a large number of consumer groups. Increasing the number of tasks for the checkpoint connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" checkpointConnector: tasksMax: 10 # ... By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when adjusting lower. More frequent checks can have a negative impact on performance. 9.7.5.1. Checking connector task operations If you are using Prometheus and Grafana to monitor your deployment, you can check MirrorMaker 2 performance. The example MirrorMaker 2 Grafana dashboard provided with Streams for Apache Kafka shows the following metrics related to tasks and latency. The number of tasks Replication latency Offset synchronization latency Additional resources Chapter 21, Setting up metrics and dashboards for Streams for Apache Kafka 9.7.6. Synchronizing ACL rules for remote topics When using MirrorMaker 2 with Streams for Apache Kafka, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator. If you are using type: simple authorization without the User Operator, the ACL rules that manage access to brokers also apply to remote topics. This means that users who have read access to a source topic can also read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 9.7.7. Securing a Kafka MirrorMaker 2 deployment This procedure describes in outline the configuration required to secure a MirrorMaker 2 deployment. You need separate configuration for the source Kafka cluster and the target Kafka cluster. You also need separate user configuration to provide the credentials required for MirrorMaker to connect to the source and target Kafka clusters. For the Kafka clusters, you specify internal listeners for secure connections within an OpenShift cluster and external listeners for connections outside the OpenShift cluster. You can configure authentication and authorization mechanisms. The security options implemented for the source and target Kafka clusters must be compatible with the security options implemented for MirrorMaker 2. After you have created the cluster and user authentication credentials, you specify them in your MirrorMaker configuration for secure connections. 
Note In this procedure, the certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates . You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority) . Before you start Before starting this procedure, take a look at the example configuration files provided by Streams for Apache Kafka. They include examples for securing a deployment of MirrorMaker 2 using mTLS or SCRAM-SHA-512 authentication. The examples specify internal listeners for connecting within an OpenShift cluster. The examples also provide the configuration for full authorization, including the ACLs that allow user operations on the source and target Kafka clusters. When configuring user access to source and target Kafka clusters, ACLs must grant access rights to internal MirrorMaker 2 connectors and read/write access to the cluster group and internal topics used by the underlying Kafka Connect framework in the target cluster. If you've renamed the cluster group or internal topics, such as when configuring MirrorMaker 2 for multiple instances , use those names in the ACLs configuration. Simple authorization uses ACL rules managed by the Kafka AclAuthorizer and StandardAuthorizer plugins to ensure appropriate access levels. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference . Prerequisites Streams for Apache Kafka is running Separate namespaces for source and target clusters The procedure assumes that the source and target Kafka clusters are installed to separate namespaces. If you want to use the Topic Operator, you'll need to do this. The Topic Operator only watches a single cluster in a specified namespace. By separating the clusters into namespaces, you will need to copy the cluster secrets so they can be accessed outside the namespace. You need to reference the secrets in the MirrorMaker configuration. Procedure Configure two Kafka resources, one to secure the source Kafka cluster and one to secure the target Kafka cluster. You can add listener configuration for authentication and enable authorization. In this example, an internal listener is configured for a Kafka cluster with TLS encryption and mTLS authentication. Kafka simple authorization is enabled. 
Example source Kafka cluster configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: "3.7" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} Example target Kafka cluster configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: "3.7" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} Create or update the Kafka resources in separate namespaces. oc apply -f <kafka_configuration_file> -n <namespace> The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. The certificates are created in the secret <cluster_name> -cluster-ca-cert . Configure two KafkaUser resources, one for a user of the source Kafka cluster and one for a user of the target Kafka cluster. Configure the same authentication and authorization types as the corresponding source and target Kafka cluster. For example, if you used tls authentication and the simple authorization type in the Kafka configuration for the source Kafka cluster, use the same in the KafkaUser configuration. Configure the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters. 
Example source user configuration for mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: "*" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: "*" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read Example target user configuration for mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # cluster group - resource: type: group name: mirrormaker2-cluster operations: - Read # access to config.storage.topic - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write # access to status.storage.topic - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write # access to offset.storage.topic - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: "*" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: "*" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write Note You can use a certificate issued outside the User Operator by setting type to tls-external . For more information, see the KafkaUserSpec schema reference . Create or update a KafkaUser resource in each of the namespaces you created for the source and target Kafka clusters. oc apply -f <kafka_user_configuration_file> -n <namespace> The User Operator creates the users representing the client (MirrorMaker), and the security credentials used for client authentication, based on the chosen authentication type. The User Operator creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS authentication. The public key is contained in a user certificate, which is signed by the clients CA. Configure a KafkaMirrorMaker2 resource with the authentication details to connect to the source and target Kafka clusters. 
Example MirrorMaker 2 configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.7.0 replicas: 1 connectCluster: "my-target-cluster" clusters: - alias: "my-source-cluster" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert certificate: ca.crt authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: "my-target-cluster" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert certificate: ca.crt authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: "my-source-cluster" targetCluster: "my-target-cluster" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: "false" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: "true" topicsPattern: "topic1|topic2|topic3" groupsPattern: "group1|group2|group3" 1 The TLS certificates for the source Kafka cluster. If they are in a separate namespace, copy the cluster secrets from the namespace of the Kafka cluster. 2 The user authentication for accessing the source Kafka cluster using the TLS mechanism. 3 The TLS certificates for the target Kafka cluster. 4 The user authentication for accessing the target Kafka cluster. Create or update the KafkaMirrorMaker2 resource in the same namespace as the target Kafka cluster. oc apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster> 9.7.8. Manually stopping or pausing MirrorMaker 2 connectors If you are using KafkaMirrorMaker2 resources to configure internal MirrorMaker connectors, use the state configuration to either stop or pause a connector. In contrast to the paused state, where the connector and tasks remain instantiated, stopping a connector retains only the configuration, with no active processes. Stopping a connector from running may be more suitable for longer durations than just pausing. While a paused connector is quicker to resume, a stopped connector has the advantages of freeing up memory and resources. Note The state configuration replaces the (deprecated) pause configuration in the KafkaMirrorMaker2ConnectorSpec schema, which allows pauses on connectors. If you were previously using the pause configuration to pause connectors, we encourage you to transition to using the state configuration only to avoid conflicts. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the MirrorMaker 2 connector you want to pause or stop: oc get KafkaMirrorMaker2 Edit the KafkaMirrorMaker2 resource to stop or pause the connector. Example configuration for stopping a MirrorMaker 2 connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 replicas: 3 connectCluster: "my-cluster-target" clusters: # ... 
mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 10 autoRestart: enabled: true state: stopped # ... Change the state configuration to stopped or paused . The default state for the connector when this property is not set is running . Apply the changes to the KafkaMirrorMaker2 configuration. You can resume the connector by changing state to running or removing the configuration. Note Alternatively, you can expose the Kafka Connect API and use the stop and pause endpoints to stop a connector from running. For example, PUT /connectors/<connector_name>/stop . You can then use the resume endpoint to restart it. 9.7.9. Manually restarting MirrorMaker 2 connectors Use the strimzi.io/restart-connector annotation to manually trigger a restart of a MirrorMaker 2 connector. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the Kafka MirrorMaker 2 connector you want to restart: oc get KafkaMirrorMaker2 Find the name of the Kafka MirrorMaker 2 connector to be restarted from the KafkaMirrorMaker2 custom resource: oc describe KafkaMirrorMaker2 <mirrormaker_cluster_name> Use the name of the connector to restart the connector by annotating the KafkaMirrorMaker2 resource in OpenShift: oc annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> "strimzi.io/restart-connector=<mirrormaker_connector_name>" In this example, connector my-connector in the my-mirror-maker-2 cluster is restarted: oc annotate KafkaMirrorMaker2 my-mirror-maker-2 "strimzi.io/restart-connector=my-connector" Wait for the reconciliation to occur (every two minutes by default). The MirrorMaker 2 connector is restarted, as long as the annotation was detected by the reconciliation process. When MirrorMaker 2 accepts the request, the annotation is removed from the KafkaMirrorMaker2 custom resource. 9.7.10. Manually restarting MirrorMaker 2 connector tasks Use the strimzi.io/restart-connector-task annotation to manually trigger a restart of a MirrorMaker 2 connector task. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the MirrorMaker 2 connector task you want to restart: oc get KafkaMirrorMaker2 Find the name of the connector and the ID of the task to be restarted from the KafkaMirrorMaker2 custom resource: oc describe KafkaMirrorMaker2 <mirrormaker_cluster_name> Task IDs are non-negative integers, starting from 0. Use the name and ID to restart the connector task by annotating the KafkaMirrorMaker2 resource in OpenShift: oc annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> "strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>" In this example, task 0 for connector my-connector in the my-mirror-maker-2 cluster is restarted: oc annotate KafkaMirrorMaker2 my-mirror-maker-2 "strimzi.io/restart-connector-task=my-connector:0" Wait for the reconciliation to occur (every two minutes by default). The MirrorMaker 2 connector task is restarted, as long as the annotation was detected by the reconciliation process. When MirrorMaker 2 accepts the request, the annotation is removed from the KafkaMirrorMaker2 custom resource. 9.8. Configuring Kafka MirrorMaker (deprecated) Update the spec properties of the KafkaMirrorMaker custom resource to configure your Kafka MirrorMaker deployment. You can configure access control for producers and consumers using TLS or SASL authentication.
This procedure shows a configuration that uses TLS encryption and mTLS authentication on the consumer and producer side. For a deeper understanding of the Kafka MirrorMaker cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. The KafkaMirrorMaker resource will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . Example KafkaMirrorMaker custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: "my-group" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: "my-topic|other-topic" 10 resources: 11 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: 19 type: opentelemetry 1 The number of replica nodes. 2 Bootstrap servers for consumer and producer. 3 Group ID for the consumer. 4 The number of consumer streams. 5 The offset auto-commit interval in milliseconds. 6 TLS encryption with key names under which TLS certificates are stored in X.509 format for consumer or producer. If certificates are stored in the same secret, it can be listed multiple times. 7 Authentication for consumer or producer, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. 8 Kafka configuration options for consumer and producer. 9 If the abortOnSendFailure property is set to true , Kafka MirrorMaker will exit and the container will restart following a send failure for a message. 10 A list of included topics mirrored from source to target Kafka cluster. 11 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 
12 Specified loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. MirrorMaker has a single logger called mirrormaker.root.logger . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 13 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 14 Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 15 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 16 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 17 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 18 Environment variables are set for distributed tracing. 19 Distributed tracing is enabled by using OpenTelemetry. Warning With the abortOnSendFailure property set to false , the producer attempts to send the next message in a topic. The original message might be lost, as there is no attempt to resend a failed message. 9.9. Configuring the Kafka Bridge Update the spec properties of the KafkaBridge custom resource to configure your Kafka Bridge deployment. In order to prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance. Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state, which is not shared with other instances. For a deeper understanding of the Kafka Bridge cluster configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference .
Example KafkaBridge custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: INFO # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: DEBUG jvmOptions: 11 "-Xmx": "1g" "-Xms": "1g" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 16 1 The number of replica nodes. 2 Bootstrap server for connection to the target Kafka cluster. Use the name of the Kafka cluster as the <cluster_name> . 3 TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times. 4 Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, the Kafka Bridge connects to Kafka brokers without authentication. 5 HTTP access to Kafka brokers. 6 CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster. 7 Consumer configuration options. 8 Producer configuration options. 9 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 10 Specified Kafka Bridge loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 11 JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge. 12 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 13 Optional: Container image configuration, which is recommended only in special situations. 14 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 15 Environment variables are set for distributed tracing. 16 Distributed tracing is enabled by using OpenTelemetry. Additional resources Using the Streams for Apache Kafka Bridge 9.10. 
Configuring Kafka and ZooKeeper storage Streams for Apache Kafka provides flexibility in configuring the data storage options of Kafka and ZooKeeper. The supported storage types are: Ephemeral (Recommended for development only) Persistent JBOD (Kafka only; not available for ZooKeeper) Tiered storage (Early access) To configure storage, you specify storage properties in the custom resource of the component. The storage type is set using the storage.type property. When using node pools, you can specify storage configuration unique to each node pool used in a Kafka cluster. The same storage properties available to the Kafka resource are also available to the KafkaNodePool pool resource. Tiered storage provides more flexibility for data management by leveraging the parallel use of storage types with different characteristics. For example, tiered storage might include the following: Higher performance and higher cost block storage Lower performance and lower cost object storage Tiered storage is an early access feature in Kafka. To configure tiered storage, you specify tieredStorage properties. Tiered storage is configured only at the cluster level using the Kafka custom resource. The storage-related schema references provide more information on the storage configuration properties: EphemeralStorage schema reference PersistentClaimStorage schema reference JbodStorage schema reference TieredStorageCustom schema reference Warning The storage type cannot be changed after a Kafka cluster is deployed. 9.10.1. Data storage considerations For Streams for Apache Kafka to work well, an efficient data storage infrastructure is essential. We strongly recommend using block storage. Streams for Apache Kafka is only tested for use with block storage. File storage, such as NFS, is not tested and there is no guarantee it will work. Choose one of the following options for your block storage: A cloud-based block storage solution, such as Amazon Elastic Block Store (EBS) Persistent storage using local persistent volumes Storage Area Network (SAN) volumes accessed by a protocol such as Fibre Channel or iSCSI Note Streams for Apache Kafka does not require OpenShift raw block volumes. 9.10.1.1. File systems Kafka uses a file system for storing messages. Streams for Apache Kafka is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. Consider the underlying architecture and requirements of your deployment when choosing and setting up your file system. For more information, refer to Filesystem Selection in the Kafka documentation. 9.10.1.2. Disk usage Use separate disks for Apache Kafka and ZooKeeper. Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access. Note You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication. 9.10.2. Ephemeral storage Ephemeral data storage is transient. All pods on a node share a local ephemeral storage space. Data is retained for as long as the pod that uses it is running. The data is lost when a pod is deleted. Although a pod can recover data in a highly available environment. Because of its transient nature, ephemeral storage is only recommended for development and testing. Ephemeral storage uses emptyDir volumes to store data. An emptyDir volume is created when a pod is assigned to a node. 
You can set the total amount of storage for the emptyDir using the sizeLimit property . Important Ephemeral storage is not suitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1. To use ephemeral storage, you set the storage type configuration in the Kafka or ZooKeeper resource to ephemeral . If you are using node pools, you can also specify ephemeral in the storage configuration of individual node pools. Example ephemeral storage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: ephemeral # ... zookeeper: storage: type: ephemeral # ... 9.10.2.1. Mount path of Kafka log directories The ephemeral volume is used by Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data/kafka-log IDX Where IDX is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0 . 9.10.3. Persistent storage Persistent data storage retains data in the event of system disruption. For pods that use persistent data storage, data is persisted across pod failures and restarts. Because of its permanent nature, persistent storage is recommended for production environments. To use persistent storage in Streams for Apache Kafka, you specify persistent-claim in the storage configuration of the Kafka or ZooKeeper resources. If you are using node pools, you can also specify persistent-claim in the storage configuration of individual node pools. You configure the resource so that pods use Persistent Volume Claims (PVCs) to make storage requests on persistent volumes (PVs). PVs represent storage volumes that are created on demand and are independent of the pods that use them. The PVC requests the amount of storage required when a pod is being created. The underlying storage infrastructure of the PV does not need to be understood. If a PV matches the storage criteria, the PVC is bound to the PV. You have two options for specifying the storage type: storage.type: persistent-claim If you choose persistent-claim as the storage type, a single persistent storage volume is defined. storage.type: jbod When you select jbod as the storage type, you have the flexibility to define an array of persistent storage volumes using unique IDs. In a production environment, it is recommended to configure the following: For Kafka or node pools, set storage.type to jbod with one or more persistent volumes. For ZooKeeper, set storage.type as persistent-claim for a single persistent volume. Persistent storage also has the following configuration options: id (optional) A storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is 0 . size (required) The size of the persistent volume claim, for example, "1000Gi". class (optional) PVCs can request different types of persistent storage by specifying a StorageClass . Storage classes define storage profiles and dynamically provision PVs based on that profile. If a storage class is not specified, the storage class marked as default in the OpenShift cluster is used. Persistent storage options might include SAN storage types or local persistent volumes . selector (optional) Configuration to specify a specific PV. Provides key:value pairs representing the labels of the volume selected. deleteClaim (optional) Boolean value to specify whether the PVC is deleted when the cluster is uninstalled. Default is false . 
Warning Increasing the size of persistent volumes in an existing Streams for Apache Kafka cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes that do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible. Example persistent storage configuration for Kafka and ZooKeeper apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: storage: type: persistent-claim size: 1000Gi # ... Example persistent storage configuration with specific storage class # ... storage: type: persistent-claim size: 500Gi class: my-storage-class # ... Use a selector to specify a labeled persistent volume that provides certain features, such as an SSD. Example persistent storage configuration with selector # ... storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true # ... 9.10.3.1. Storage class overrides Instead of using the default storage class, you can specify a different storage class for one or more Kafka or ZooKeeper nodes. This is useful, for example, when storage classes are restricted to different availability zones or data centers. You can use the overrides field for this purpose. In this example, the default storage class is named my-storage-class : Example storage configuration with class overrides apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # ... kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # ... # ... zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # ... As a result of the configured overrides property, the volumes use the following storage classes: The persistent volumes of ZooKeeper node 0 use my-storage-class-zone-1a . The persistent volumes of ZooKeeper node 1 use my-storage-class-zone-1b . The persistent volumes of ZooKeeper node 2 use my-storage-class-zone-1c . The persistent volumes of Kafka broker 0 use my-storage-class-zone-1a . The persistent volumes of Kafka broker 1 use my-storage-class-zone-1b . The persistent volumes of Kafka broker 2 use my-storage-class-zone-1c . The overrides property is currently used only to override the storage class . Overrides for other storage configuration properties is not currently supported. 9.10.3.2. PVC resources for persistent storage When persistent storage is used, it creates PVCs with the following names: data- cluster-name -kafka- idx PVC for the volume used for storing data for the Kafka broker pod idx . data- cluster-name -zookeeper- idx PVC for the volume used for storing data for the ZooKeeper node pod idx . 9.10.3.3. 
Mount path of Kafka log directories The persistent volume is used by the Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data/kafka-log IDX Where IDX is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0 . 9.10.4. Resizing persistent volumes Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, Streams for Apache Kafka instructs the storage infrastructure to make the change. Storage expansion is supported in Streams for Apache Kafka clusters that use persistent-claim volumes. Storage reduction is only possible when using multiple disks per broker. You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster). Important You cannot decrease the size of persistent volumes because it is not currently supported in OpenShift. Prerequisites An OpenShift cluster with support for volume resizing. The Cluster Operator is running. A Kafka cluster using persistent volumes created using a storage class that supports volume expansion. Procedure Edit the Kafka resource for your cluster. Change the size property to increase the size of the persistent volume allocated to a Kafka cluster, a ZooKeeper cluster, or both. For Kafka clusters, update the size property under spec.kafka.storage . For ZooKeeper clusters, update the size property under spec.zookeeper.storage . Kafka configuration to increase the volume size to 2000Gi apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: persistent-claim size: 2000Gi class: my-storage-class # ... zookeeper: # ... Create or update the resource: oc apply -f <kafka_configuration_file> OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically. Verify that the storage capacity has increased for the relevant pods on the cluster: oc get pv Kafka broker pods with increased storage NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1 The output shows the names of each PVC associated with a broker pod. Additional resources For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes . 9.10.5. JBOD storage JBOD storage allows you to configure your Kafka cluster to use multiple disks or volumes. This approach provides increased data storage capacity for Kafka brokers, and can lead to performance improvements. A JBOD configuration is defined by one or more volumes, each of which can be either ephemeral or persistent . The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. For example, you cannot decrease the size of a persistent storage volume after it has been provisioned, nor can you change the value of sizeLimit when the type is ephemeral . Note JBOD storage is supported for Kafka only , not for ZooKeeper. To use JBOD storage, you set the storage type configuration in the Kafka resource to jbod . 
If you are using node pools, you can also specify jbod in the storage configuration of individual node pools. The volumes property allows you to describe the disks that make up your JBOD storage array or configuration. Example JBOD storage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false # ... The IDs cannot be changed once the JBOD volumes are created. You can add or remove volumes from the JBOD configuration. 9.10.5.1. PVC resource for JBOD storage When persistent storage is used to declare JBOD volumes, it creates a PVC with the following name: data- id - cluster-name -kafka- idx PVC for the volume used for storing data for the Kafka broker pod idx . The id is the ID of the volume used for storing data for the Kafka broker pod. 9.10.5.2. Mount path of Kafka log directories The JBOD volumes are used by Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data- id /kafka-log idx Where id is the ID of the volume used for storing data for Kafka broker pod idx . For example, /var/lib/kafka/data-0/kafka-log0 . 9.10.6. Adding volumes to JBOD storage This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. Note When adding a new volume under an id that was used and removed in the past, make sure that the previously used PersistentVolumeClaims have been deleted. Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with JBOD storage Procedure Edit the spec.kafka.storage.volumes property in the Kafka resource. Add the new volumes to the volumes array. For example, add the new volume with id 2 : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: # ... Create or update the resource: oc apply -f <kafka_configuration_file> Create new topics or reassign existing partitions to the new disks. Tip Cruise Control is an effective tool for reassigning partitions. To perform an intra-broker disk balance, you set rebalanceDisk to true under the KafkaRebalance.spec . 9.10.7. Removing volumes from JBOD storage This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage configuration must always contain at least one volume. Important To avoid data loss, you have to move all partitions before removing the volumes. Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with JBOD storage with two or more volumes Procedure Reassign all partitions from the disks that you are going to remove. Any data in partitions still assigned to the disks being removed might be lost. Tip You can use the kafka-reassign-partitions.sh tool to reassign the partitions. Edit the spec.kafka.storage.volumes property in the Kafka resource. Remove one or more volumes from the volumes array.
For example, remove the volumes with ids 1 and 2 : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: # ... Create or update the resource: oc apply -f <kafka_configuration_file> 9.10.8. Tiered storage (early access) Tiered storage introduces a flexible approach to managing Kafka data whereby log segments are moved to a separate storage system. For example, you can combine the use of block storage on brokers for frequently accessed data and offload older or less frequently accessed data from the block storage to more cost-effective, scalable remote storage solutions, such as Amazon S3, without compromising data accessibility and durability. Warning Tiered storage is an early access Kafka feature, which is also available in Streams for Apache Kafka. Due to its current limitations , it is not recommended for production environments. Tiered storage requires an implementation of Kafka's RemoteStorageManager interface to handle communication between Kafka and the remote storage system, which is enabled through configuration of the Kafka resource. Streams for Apache Kafka uses Kafka's TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM) when custom tiered storage is enabled. The RLMM manages the metadata related to remote storage. To use custom tiered storage, do the following: Include a tiered storage plugin for Kafka in the Streams for Apache Kafka image by building a custom container image. The plugin must provide the necessary functionality for a Kafka cluster managed by Streams for Apache Kafka to interact with the tiered storage solution. Configure Kafka for tiered storage using tieredStorage properties in the Kafka resource. Specify the class name and path for the custom RemoteStorageManager implementation, as well as any additional configuration. If required, specify RLMM-specific tiered storage configuration. Example custom tiered storage configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: tieredStorage: type: custom 1 remoteStorageManager: 2 className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: storage.bucket.name: my-bucket 3 # ... config: rlmm.config.remote.log.metadata.topic.replication.factor: 1 4 # ... 1 The type must be set to custom . 2 The configuration for the custom RemoteStorageManager implementation, including class name and path. 3 Configuration to pass to the custom RemoteStorageManager implementation, which Streams for Apache Kafka automatically prefixes with rsm.config. . 4 Tiered storage configuration to pass to the RLMM, which requires an rlmm.config. prefix. For more information on tiered storage configuration, see the Apache Kafka documentation . 9.11. Configuring CPU and memory resource limits and requests By default, the Streams for Apache Kafka Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. The ideal resource allocation depends on your specific requirements and use cases. It is recommended to configure CPU and memory resources for each container by setting appropriate requests and limits . 9.12. 
Configuring pod scheduling To avoid performance degradation caused by resource conflicts between applications scheduled on the same OpenShift node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka. 9.12.1. Specifying affinity, tolerations, and topology spread constraints Use affinity, tolerations, and topology spread constraints to schedule the pods of Kafka resources onto nodes. Affinity, tolerations, and topology spread constraints are configured using the affinity , tolerations , and topologySpreadConstraints properties in the following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaBridge.spec.template.pod KafkaMirrorMaker.spec.template.pod KafkaMirrorMaker2.spec.template.pod The format of the affinity , tolerations , and topologySpreadConstraints properties follows the OpenShift specification. The affinity configuration can include different types of affinity: Pod affinity and anti-affinity Node affinity Additional resources Kubernetes node and pod affinity documentation Kubernetes taints and tolerations Controlling pod placement by using pod topology spread constraints 9.12.1.1. Use pod anti-affinity to avoid critical applications sharing nodes Use pod anti-affinity to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases. 9.12.1.2. Use node affinity to schedule workloads onto specific nodes An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow Streams for Apache Kafka components to be scheduled on the right nodes. OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either a built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node. 9.12.1.3. Use node affinity and tolerations for dedicated nodes Use taints to create dedicated nodes, then schedule Kafka pods on the dedicated nodes by configuring node affinity and tolerations. Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling, and normal pods will not be scheduled to run on them. Only services that can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes that could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability. 9.12.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node Many Kafka brokers or ZooKeeper nodes can run on the same OpenShift worker node.
If the worker node fails, they will all become unavailable at the same time. To improve reliability, you can use podAntiAffinity configuration to schedule each Kafka broker or ZooKeeper node on a different OpenShift worker node. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. To make sure that no worker nodes are shared by Kafka brokers or ZooKeeper nodes, use the strimzi.io/name label. Set the topologyKey to kubernetes.io/hostname to specify that the selected pods are not scheduled on nodes with the same hostname. This will still allow the same worker node to be shared by a single Kafka broker and a single ZooKeeper node. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: "kubernetes.io/hostname" # ... Where CLUSTER-NAME is the name of your Kafka custom resource. If you even want to make sure that a Kafka broker and ZooKeeper node do not share the same worker node, use the strimzi.io/cluster label. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: "kubernetes.io/hostname" # ... Where CLUSTER-NAME is the name of your Kafka custom resource. Create or update the resource. oc apply -f <kafka_configuration_file> 9.12.3. Configuring pod anti-affinity in Kafka components Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity , OpenShift will not schedule Kafka brokers on the same nodes as other workloads. Typically, you want to avoid Kafka running on the same worker node as other network or storage intensive applications such as databases, storage or other messaging platforms. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 9.12.4. 
Configuring node affinity in Kafka components Prerequisites An OpenShift cluster A running Cluster Operator Procedure Label the nodes where Streams for Apache Kafka components should be scheduled. This can be done using oc label : oc label node NAME-OF-NODE node-type=fast-network Alternatively, some of the existing labels might be reused. Edit the affinity property in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 9.12.5. Setting up dedicated nodes and scheduling pods on them Prerequisites An OpenShift cluster A running Cluster Operator Procedure Select the nodes which should be used as dedicated. Make sure there are no workloads scheduled on these nodes. Set the taints on the selected nodes: This can be done using oc adm taint : oc adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule Additionally, add a label to the selected nodes as well. This can be done using oc label : oc label node NAME-OF-NODE dedicated=Kafka Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: tolerations: - key: "dedicated" operator: "Equal" value: "Kafka" effect: "NoSchedule" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 9.13. Configuring logging levels Configure logging levels in the custom resources of Kafka components and Streams for Apache Kafka operators. You can specify the logging levels directly in the spec.logging property of the custom resource. Or you can define the logging properties in a ConfigMap that's referenced in the custom resource using the configMapKeyRef property. The advantages of using a ConfigMap are that the logging properties are maintained in one place and are accessible to more than one resource. You can also reuse the ConfigMap for more than one resource. If you are using a ConfigMap to specify loggers for Streams for Apache Kafka Operators, you can also append the logging specification to add filters. You specify a logging type in your logging specification: inline when specifying logging levels directly external when referencing a ConfigMap Example inline logging configuration # ... logging: type: inline loggers: kafka.root.logger.level: INFO # ... Example external logging configuration # ... logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key # ... Values for the name and key of the ConfigMap are mandatory. Default logging is used if the name or key is not set. 9.13.1. Logging options for Kafka components and operators For more information on configuring logging for specific Kafka components or operators, see the following sections. Kafka component logging Kafka logging ZooKeeper logging Kafka Connect and MirrorMaker 2 logging MirrorMaker logging Kafka Bridge logging Cruise Control logging Operator logging Cluster Operator logging Topic Operator logging User Operator logging 9.13.2. 
Creating a ConfigMap for logging To use a ConfigMap to define logging properties, you create the ConfigMap and then reference it as part of the logging definition in the spec of a resource. The ConfigMap must contain the appropriate logging configuration. log4j.properties for Kafka components, ZooKeeper, and the Kafka Bridge log4j2.properties for the Topic Operator and User Operator The configuration must be placed under these properties. In this procedure a ConfigMap defines a root logger for a Kafka resource. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file. ConfigMap example with a root logger definition for Kafka: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level="INFO" If you are using a properties file, specify the file at the command line: oc create configmap logging-configmap --from-file=log4j.properties The properties file defines the logging configuration: # Define the logger kafka.root.logger.level="INFO" # ... Define external logging in the spec of the resource, setting the logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap. # ... logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties # ... Create or update the resource. oc apply -f <kafka_configuration_file> 9.13.3. Configuring Cluster Operator logging Cluster Operator logging is configured through a ConfigMap named strimzi-cluster-operator . A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml . You configure Cluster Operator logging by changing the data.log4j2.properties values in this ConfigMap . To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml file and then run the following command: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Alternatively, edit the ConfigMap directly: oc edit configmap strimzi-cluster-operator With this ConfigMap, you can control various aspects of logging, including the root logger level, log output format, and log levels for different components. The monitorInterval setting, determines how often the logging configuration is reloaded. You can also control the logging levels for the Kafka AdminClient , ZooKeeper ZKTrustManager , Netty, and the OkHttp client. Netty is a framework used in Streams for Apache Kafka for network communication, and OkHttp is a library used for making HTTP requests. If the ConfigMap is missing when the Cluster Operator is deployed, the default logging values are used. If the ConfigMap is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap to load a new logging configuration. Note Do not remove the monitorInterval option from the ConfigMap . 9.13.4. Adding logging filters to Streams for Apache Kafka operators If you are using a ConfigMap to configure the (log4j2) logging levels for Streams for Apache Kafka operators, you can also define logging filters to limit what's returned in the log. Logging filters are useful when you have a large number of logging messages. Suppose you set the log level for the logger as DEBUG ( rootLogger.level="DEBUG" ). 
Logging filters reduce the number of logs returned for the logger at that level, so you can focus on a specific resource. When the filter is set, only log messages matching the filter are logged. Filters use markers to specify what to include in the log. You specify a kind, namespace and name for the marker. For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka , and use the namespace and name of the failing cluster. This example shows a marker filter for a Kafka cluster named my-kafka-cluster . Basic logging filter configuration rootLogger.level="INFO" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4 1 The MarkerFilter type compares a specified marker for filtering. 2 The onMatch property accepts the log if the marker matches. 3 The onMismatch property rejects the log if the marker does not match. 4 The marker used for filtering is in the format KIND(NAMESPACE/NAME-OF-RESOURCE) . You can create one or more filters. Here, the log is filtered for two Kafka clusters. Multiple logging filter configuration appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2) Adding filters to the Cluster Operator To add filters to the Cluster Operator, update its logging ConfigMap YAML file ( install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml ). Procedure Update the 050-ConfigMap-strimzi-cluster-operator.yaml file to add the filter properties to the ConfigMap. In this example, the filter properties return logs only for the my-kafka-cluster Kafka cluster: kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: #... appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) Alternatively, edit the ConfigMap directly: oc edit configmap strimzi-cluster-operator If you updated the YAML file instead of editing the ConfigMap directly, apply the changes by deploying the ConfigMap: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Adding filters to the Topic Operator or User Operator To add filters to the Topic Operator or User Operator, create or edit a logging ConfigMap. In this procedure a logging ConfigMap is created with filters for the Topic Operator. The same approach is used for the User Operator. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file. 
In this example, the filter properties return logs only for the my-topic topic: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level="INFO" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic) If you are using a properties file, specify the file at the command line: oc create configmap logging-configmap --from-file=log4j2.properties The properties file defines the logging configuration: # Define the logger rootLogger.level="INFO" # Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic) # ... Define external logging in the spec of the resource, setting the logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap. For the Topic Operator, logging is specified in the topicOperator configuration of the Kafka resource. spec: # ... entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties Apply the changes by deploying the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Additional resources Configuring Kafka Cluster Operator logging Topic Operator logging User Operator logging 9.14. Using ConfigMaps to add configuration Add specific configuration to your Streams for Apache Kafka deployment using ConfigMap resources. ConfigMaps use key-value pairs to store non-confidential data. Configuration data added to ConfigMaps is maintained in one place and can be reused amongst components. ConfigMaps can only store the following types of configuration data: Logging configuration Metrics configuration External configuration for Kafka Connect connectors You cannot use ConfigMaps for other areas of configuration. When you configure a component, you can add a reference to a ConfigMap using the configMapKeyRef property. For example, you can use configMapKeyRef to reference a ConfigMap that provides configuration for logging. You might use a ConfigMap to pass a Log4j configuration file. You add the reference to the logging configuration. Example ConfigMap for logging # ... logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key # ... To use a ConfigMap for metrics configuration, you add a reference to the metricsConfig configuration of the component in the same way. ExternalConfiguration properties make data from a ConfigMap (or Secret) mounted into a pod available as environment variables or volumes. You can use external configuration data for the connectors used by Kafka Connect. The data might be related to an external data source, providing the values needed for the connector to communicate with that data source. For example, you can use the configMapKeyRef property to pass configuration data from a ConfigMap as an environment variable. Example ConfigMap providing environment variable values apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ...
externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key If you are using ConfigMaps that are managed externally, use configuration providers to load the data in the ConfigMaps. 9.14.1. Naming custom ConfigMaps Streams for Apache Kafka creates its own ConfigMaps and other resources when it is deployed to OpenShift. The ConfigMaps contain data necessary for running components. The ConfigMaps created by Streams for Apache Kafka must not be edited. Make sure that any custom ConfigMaps you create do not have the same name as these default ConfigMaps. If they have the same name, they will be overwritten. For example, if your ConfigMap has the same name as the ConfigMap for the Kafka cluster, it will be overwritten when there is an update to the Kafka cluster. Additional resources List of Kafka cluster resources (including ConfigMaps) Logging configuration metricsConfig ExternalConfiguration schema reference Loading configuration values from external sources 9.15. Loading configuration values from external sources Use configuration providers to load configuration data from external sources. The providers operate independently of Streams for Apache Kafka. You can use them to load configuration data for all Kafka components, including producers and consumers. You reference the external source in the configuration of the component and provide access rights. The provider loads data without needing to restart the Kafka component or extracting files, even when referencing a new external source. For example, use providers to supply the credentials for the Kafka Connect connector configuration. The configuration must include any access rights to the external source. 9.15.1. Enabling configuration providers You can enable one or more configuration providers using the config.providers properties in the spec configuration of a component. Example configuration to enable a configuration provider apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider # ... KubernetesSecretConfigProvider Loads configuration data from OpenShift secrets. You specify the name of the secret and the key within the secret where the configuration data is stored. This provider is useful for storing sensitive configuration data like passwords or other user credentials. KubernetesConfigMapConfigProvider Loads configuration data from OpenShift config maps. You specify the name of the config map and the key within the config map where the configuration data is stored. This provider is useful for storing non-sensitive configuration data. EnvVarConfigProvider Loads configuration data from environment variables. You specify the name of the environment variable where the configuration data is stored. This provider is useful for configuring applications running in containers, for example, to load certificates or JAAS configuration from environment variables mapped from secrets. FileConfigProvider Loads configuration data from a file. You specify the path to the file where the configuration data is stored. This provider is useful for loading configuration data from files that are mounted into containers. DirectoryConfigProvider Loads configuration data from files within a directory. You specify the path to the directory where the configuration files are stored. 
This provider is useful for loading multiple configuration files and for organizing configuration data into separate files. To use KubernetesSecretConfigProvider and KubernetesConfigMapConfigProvider , which are part of the OpenShift Configuration Provider plugin, you must set up access rights to the namespace that contains the configuration file. You can use the other providers without setting up access rights. You can supply connector configuration for Kafka Connect or MirrorMaker 2 in this way by doing the following: Mount config maps or secrets into the Kafka Connect pod as environment variables or volumes Enable EnvVarConfigProvider , FileConfigProvider , or DirectoryConfigProvider in the Kafka Connect or MirrorMaker 2 configuration Pass connector configuration using the externalConfiguration property in the spec of the KafkaConnect or KafkaMirrorMaker2 resource Using providers helps prevent the passing of restricted information through the Kafka Connect REST interface. You can use this approach in the following scenarios: Mounting environment variables with the values a connector uses to connect and communicate with a data source Mounting a properties file with values that are used to configure Kafka Connect connectors Mounting files in a directory that contains values for the TLS truststore and keystore used by a connector Note A restart is required when using a new Secret or ConfigMap for a connector, which can disrupt other connectors. Additional resources ExternalConfiguration schema reference 9.15.2. Loading configuration values from secrets or config maps Use the KubernetesSecretConfigProvider to provide configuration properties from a secret or the KubernetesConfigMapConfigProvider to provide configuration properties from a config map. In this procedure, a config map provides configuration properties for a connector. The properties are specified as key values of the config map. The config map is mounted into the Kafka Connect pod as a volume. Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a config map containing the connector configuration. Example config map with connector properties apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2 Procedure Configure the KafkaConnect resource. Enable the KubernetesConfigMapConfigProvider . The specification shown here supports loading values from config maps and secrets. Example Kafka Connect configuration to use config maps and secrets apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 # ... 1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers , taking the form config.providers.USD{alias}.class . 2 KubernetesConfigMapConfigProvider provides values from config maps. 3 KubernetesSecretConfigProvider provides values from secrets. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Create a role that permits access to the values in the external config map.
Example role to access values from a config map apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["my-connector-configuration"] verbs: ["get"] # ... The rule gives the role permission to access the my-connector-configuration config map. Create a role binding to permit access to the namespace that contains the config map. Example role binding to access the namespace that contains the config map apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io # ... The role binding gives the role permission to access the my-project namespace. The service account must be the same one used by the Kafka Connect deployment. The service account name format is <cluster_name>-connect , where <cluster_name> is the name of the KafkaConnect custom resource. Reference the config map in the connector configuration. Example connector configuration referencing the config map apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # ... config: option: USD{configmaps:my-project/my-connector-configuration:option1} # ... # ... The placeholder structure is configmaps:<path_and_file_name>:<property> . KubernetesConfigMapConfigProvider reads and extracts the option1 property value from the external config map. 9.15.3. Loading configuration values from environment variables Use the EnvVarConfigProvider to provide configuration properties as environment variables. Environment variables can contain values from config maps or secrets. In this procedure, environment variables provide configuration properties for a connector to communicate with Amazon AWS. The connector must be able to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The values of the environment variables are derived from a secret mounted into the Kafka Connect pod. Note The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_ . Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a secret containing the connector configuration. Example secret with values for environment variables apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE= Procedure Configure the KafkaConnect resource. Enable the EnvVarConfigProvider Specify the environment variables using the externalConfiguration property. Example Kafka Connect configuration to use external environment variables apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env 1 config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2 # ... externalConfiguration: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey # ... 1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers , taking the form config.providers.USD{alias}.class . 
2 EnvVarConfigProvider provides values from environment variables. 3 The environment variable takes a value from the secret. 4 The name of the secret containing the environment variable. 5 The name of the key stored in the secret. Note The secretKeyRef property references keys in a secret. If you are using a config map instead of a secret, use the configMapKeyRef property. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Reference the environment variable in the connector configuration. Example connector configuration referencing the environment variable apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # ... config: option: USD{env:AWS_ACCESS_KEY_ID} option: USD{env:AWS_SECRET_ACCESS_KEY} # ... # ... The placeholder structure is env:<environment_variable_name> . EnvVarConfigProvider reads and extracts the environment variable values from the mounted secret. 9.15.4. Loading configuration values from a file within a directory Use the FileConfigProvider to provide configuration properties from a file within a directory. Files can be config maps or secrets. In this procedure, a file provides configuration properties for a connector. A database name and password are specified as properties of a secret. The secret is mounted to the Kafka Connect pod as a volume. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name> . Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a secret containing the connector configuration. Example secret with database properties apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password 1 The connector configuration in properties file format. 2 Database username and password properties used in the configuration. Procedure Configure the KafkaConnect resource. Enable the FileConfigProvider Specify the file using the externalConfiguration property. Example Kafka Connect configuration to use an external property file apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 #... externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4 1 The alias for the configuration provider is used to define other configuration parameters. 2 FileConfigProvider provides values from properties files. The parameter uses the alias from config.providers , taking the form config.providers.USD{alias}.class . 3 The name of the volume containing the secret. 4 The name of the secret. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Reference the file properties in the connector configuration as placeholders. Example connector configuration referencing the file apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: "3306" database.user: "USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}" database.password: "USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}" database.server.id: "184054" #... 
The placeholder structure is file:<path_and_file_name>:<property> . FileConfigProvider reads and extracts the database username and password property values from the mounted secret. 9.15.5. Loading configuration values from multiple files within a directory Use the DirectoryConfigProvider to provide configuration properties from multiple files within a directory. Files can be config maps or secrets. In this procedure, a secret provides the TLS keystore and truststore user credentials for a connector. The credentials are in separate files. The secrets are mounted into the Kafka Connect pod as volumes. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name> . Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a secret containing the user credentials. Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store The my-user secret provides the keystore credentials ( user.crt and user.key ) for the connector. The <cluster_name>-cluster-ca-cert secret generated when deploying the Kafka cluster provides the cluster CA certificate as truststore credentials ( ca.crt ). Procedure Configure the KafkaConnect resource. Enable the DirectoryConfigProvider Specify the files using the externalConfiguration property. Example Kafka Connect configuration to use external property files apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 #... externalConfiguration: volumes: 3 - name: cluster-ca 4 secret: secretName: my-cluster-cluster-ca-cert 5 - name: my-user secret: secretName: my-user 6 1 The alias for the configuration provider is used to define other configuration parameters. 2 DirectoryConfigProvider provides values from files in a directory. The parameter uses the alias from config.providers , taking the form config.providers.USD{alias}.class . 3 The names of the volumes containing the secrets. 4 The name of the secret for the cluster CA certificate to supply truststore configuration. 5 The name of the secret for the user to supply keystore configuration. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Reference the file properties in the connector configuration as placeholders. Example connector configuration referencing the files apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # ... 
database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: "USD{directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: "USD{directory:/opt/kafka/external-configuration/my-user:user.crt}" database.history.producer.ssl.keystore.key: "USD{directory:/opt/kafka/external-configuration/my-user:user.key}" #... The placeholder structure is directory:<path>:<file_name> . DirectoryConfigProvider reads and extracts the credentials from the mounted secrets. 9.16. Customizing OpenShift resources A Streams for Apache Kafka deployment creates OpenShift resources, such as Deployment , Pod , and Service resources. These resources are managed by Streams for Apache Kafka operators. Only the operator that is responsible for managing a particular OpenShift resource can change that resource. If you try to manually change an operator-managed OpenShift resource, the operator will revert your changes back. Changing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as the following: Adding custom labels or annotations that control how Pods are treated by Istio or other services Managing how Loadbalancer -type Services are created by the cluster To make the changes to an OpenShift resource, you can use the template property within the spec section of various Streams for Apache Kafka custom resources. Here is a list of the custom resources where you can apply the changes: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator Kafka.spec.kafkaExporter Kafka.spec.cruiseControl KafkaNodePool.spec KafkaConnect.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec KafkaUser.spec For more information about these properties, see the Streams for Apache Kafka Custom Resource API Reference . The Streams for Apache Kafka Custom Resource API Reference provides more details about the customizable fields. In the following example, the template property is used to modify the labels in a Kafka broker's pod. Example template customization apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # ... template: pod: metadata: labels: mylabel: myvalue # ... 9.16.1. Customizing the image pull policy Streams for Apache Kafka allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY in the Cluster Operator deployment. The STRIMZI_IMAGE_PULL_POLICY environment variable can be set to three different values: Always Container images are pulled from the registry every time the pod is started or restarted. IfNotPresent Container images are pulled from the registry only when they were not pulled before. Never Container images are never pulled from the registry. Currently, the image pull policy can only be customized for all Kafka, Kafka Connect, and Kafka MirrorMaker clusters at once. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. Additional resources Disruptions . 9.16.2. Applying a termination grace period Apply a termination grace period to give a Kafka cluster enough time to shut down cleanly. Specify the time using the terminationGracePeriodSeconds property. 
Add the property to the template.pod configuration of the Kafka custom resource. The time you add will depend on the size of your Kafka cluster. The OpenShift default for the termination grace period is 30 seconds. If you observe that your clusters are not shutting down cleanly, you can increase the termination grace period. A termination grace period is applied every time a pod is restarted. The period begins when OpenShift sends a term (termination) signal to the processes running in the pod. The period should reflect the amount of time required to transfer the processes of the terminating pod to another pod before they are stopped. After the period ends, a kill signal stops any processes still running in the pod. The following example adds a termination grace period of 120 seconds to the Kafka custom resource. You can also specify the configuration in the custom resources of other Kafka components. Example termination grace period configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... template: pod: terminationGracePeriodSeconds: 120 # ... # ...
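To confirm that the new grace period has been applied, you can check the pod specification after the Cluster Operator has rolled the pods. The following command is a minimal sketch; the pod name my-cluster-kafka-0 is an assumption based on the naming used in the earlier examples, so substitute the name of one of your own pods.
oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.terminationGracePeriodSeconds}'
The command prints the termination grace period currently set on the pod, for example 120.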
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #",
"env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2",
"env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: \"^key1.*\"",
"env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2",
"env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64",
"<cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local",
"# env: # - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #",
"# env: # - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" #",
"annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"",
"annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"",
"describe <kind_of_custom_resource> <name_of_custom_resource>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused",
"env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3",
"spec containers: - name: strimzi-cluster-operator # env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: \"true\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: \"my-strimzi-cluster-operator\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject",
"create -f install/cluster-operator -n myproject",
"get deployments -n myproject",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"HTTP_PROXY\" value: \"http://proxy.com\" 1 - name: \"HTTPS_PROXY\" value: \"https://proxy.com\" 2 - name: \"NO_PROXY\" value: \"internal.com, other.domain.com\" 3 #",
"edit deployment strimzi-cluster-operator",
"create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"FIPS_MODE\" value: \"disabled\" 1 #",
"edit deployment strimzi-cluster-operator",
"apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 21",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # cluster group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: \"*\"",
"apply -f KAFKA-USER-CONFIG-FILE",
"get KafkaConnector",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: file: \"/opt/kafka/LICENSE\" topic: my-topic state: stopped #",
"get KafkaConnector",
"annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=\"true\"",
"get KafkaConnector",
"describe KafkaConnector <kafka_connector_name>",
"annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=\"0\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 1 replicas: 3 2 connectCluster: \"my-cluster-target\" 3 clusters: 4 - alias: \"my-cluster-source\" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: \"my-cluster-target\" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: \"my-cluster-source\" 15 targetCluster: \"my-cluster-target\" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: \"false\" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" topicsPattern: \"topic1|topic2|topic3\" 33 groupsPattern: \"group1|group2|group3\" 34 resources: 35 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-target\" config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # heartbeatConnector: config: producer.override.request.timeout.ms: 30000 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" checkpointConnector: tasksMax: 10 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.7\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.7\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file> -n <namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: \"*\" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # cluster group - resource: type: group name: mirrormaker2-cluster operations: - Read # access to config.storage.topic - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write # access to status.storage.topic - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write # access to offset.storage.topic - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: \"*\" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write",
"apply -f <kafka_user_configuration_file> -n <namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.7.0 replicas: 1 connectCluster: \"my-target-cluster\" clusters: - alias: \"my-source-cluster\" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert certificate: ca.crt authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: \"my-target-cluster\" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert certificate: ca.crt authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: \"my-source-cluster\" targetCluster: \"my-target-cluster\" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: \"false\" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: \"true\" topicsPattern: \"topic1|topic2|topic3\" groupsPattern: \"group1|group2|group3\"",
"apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>",
"get KafkaMirrorMaker2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 replicas: 3 connectCluster: \"my-cluster-target\" clusters: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 autoRestart: enabled: true state: stopped #",
"get KafkaMirrorMaker2",
"describe KafkaMirrorMaker2 <mirrormaker_cluster_name>",
"annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector=<mirrormaker_connector_name>\"",
"annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector=my-connector\"",
"get KafkaMirrorMaker2",
"describe KafkaMirrorMaker2 <mirrormaker_cluster_name>",
"annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>\"",
"annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector-task=my-connector:0\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: \"my-group\" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: \"my-topic|other-topic\" 10 resources: 11 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: 19 type: opentelemetry",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: INFO # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG jvmOptions: 11 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 16",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: ephemeral # zookeeper: storage: type: ephemeral #",
"/var/lib/kafka/data/kafka-log IDX",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: storage: type: persistent-claim size: 1000Gi #",
"storage: type: persistent-claim size: 500Gi class: my-storage-class",
"storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # # zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c #",
"/var/lib/kafka/data/kafka-log IDX",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: persistent-claim size: 2000Gi class: my-storage-class # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get pv",
"NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false #",
"/var/lib/kafka/data- id /kafka-log idx",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: tieredStorage: type: custom 1 remoteStorageManager: 2 className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: storage.bucket.name: my-bucket 3 # config: rlmm.config.remote.log.metadata.topic.replication.factor: 1 4 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: \"kubernetes.io/hostname\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #",
"apply -f <kafka_configuration_file>",
"label node NAME-OF-NODE node-type=fast-network",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #",
"apply -f <kafka_configuration_file>",
"adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule",
"label node NAME-OF-NODE dedicated=Kafka",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #",
"apply -f <kafka_configuration_file>",
"logging: type: inline loggers: kafka.root.logger.level: INFO",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key",
"kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"",
"create configmap logging-configmap --from-file=log4j.properties",
"Define the logger kafka.root.logger.level=\"INFO\"",
"logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties",
"apply -f <kafka_configuration_file>",
"create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml",
"edit configmap strimzi-cluster-operator",
"rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4",
"appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)",
"kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: # appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)",
"edit configmap strimzi-cluster-operator",
"create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml",
"kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)",
"create configmap logging-configmap --from-file=log4j2.properties",
"Define the logger rootLogger.level=\"INFO\" Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)",
"spec: # entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider #",
"apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{configmaps:my-project/my-connector-configuration:option1} #",
"apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2 # externalConfiguration: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{env:AWS_ACCESS_KEY_ID} option: USD{env:AWS_SECRET_ACCESS_KEY} #",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}\" database.password: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}\" database.server.id: \"184054\" #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 # externalConfiguration: volumes: 3 - name: cluster-ca 4 secret: secretName: my-cluster-cluster-ca-cert 5 - name: my-user secret: secretName: my-user 6",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: \"USD{directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}\" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: \"USD{directory:/opt/kafka/external-configuration/my-user:user.crt}\" database.history.producer.ssl.keystore.key: \"USD{directory:/opt/kafka/external-configuration/my-user:user.key}\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: pod: metadata: labels: mylabel: myvalue #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: pod: terminationGracePeriodSeconds: 120 # #"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/overview-str
|
4.229. pinentry
|
4.229. pinentry 4.229.1. RHBA-2011:1096 - pinentry bug fix update Updated pinentry packages that fix one bug are now available for Red Hat Enterprise Linux 6. The pinentry package contains a collection of simple PIN or password entry dialogs, which utilize the Assuan protocol as described by the Project Aegypten. The pinentry package also contains the command line version of the PIN entry dialog. Bug Fix BZ# 677665 Prior to this update, there was a problem when entering a password using the pinentry-curses utility; an error message was displayed instead of the password entry dialog if pinentry-curses was run under a user different from the user who owned the current tty. This bug has been fixed in this update so that no error message is now displayed and pinentry-curses asks for a password as expected. All users of pinentry are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pinentry
|
2.3. diskdevstat and netdevstat
|
2.3. diskdevstat and netdevstat diskdevstat and netdevstat are SystemTap tools that collect detailed information about the disk activity and network activity of all applications running on a system. These tools were inspired by PowerTOP , which shows the number of CPU wakeups by every application per second (refer to Section 2.2, "PowerTOP" ). The statistics that these tools collect allow you to identify applications that waste power with many small I/O operations rather than fewer, larger operations. Other monitoring tools that measure only transfer speeds do not help to identify this type of usage. Install these tools with SystemTap with the following command as root : Run the tools with the command: or the command: Both commands can take up to three parameters, as follows: update_interval The time in seconds between updates of the display. Default: 5 total_duration The time in seconds for the whole run. Default: 86400 (1 day) display_histogram Flag whether to histogram for all the collected data at the end of the run. The output of the diskdevstat command resembles that of PowerTOP . See the example. Example 2.1. An Output of the diskdevstat Command Here is sample output from a longer diskdevstat run: Three applications stand out: These three applications have a WRITE_CNT greater than 0 , which means that they performed some form of write during the measurement. Of those, plasma was the worst offender by a large degree: it performed the most write operations, and the average time between writes was the lowest. Plasma would therefore be the best candidate to investigate if you were concerned about power-inefficient applications. Use the strace and ltrace commands to examine applications more closely by tracing all system calls of the given process ID. Run: The output of strace contains a repeating pattern every 45 seconds that opened the KDE icon cache file of the user for writing followed by an immediate close of the file again. This led to a necessary physical write to the hard disk as the file metadata (specifically, the modification time) had changed. The final fix was to prevent those unnecessary calls when no updates to the icons had occurred. For reference on what the columns in the diskdevstat command stand for, see this table: Table 2.1. Reading the diskdevstat Output PID the process ID of the application UID the user ID under which the applications is running DEV the device on which the I/O took place WRITE_CNT the total number of write operations WRITE_MIN the lowest time taken for two consecutive writes (in seconds) WRITE_MAX the greatest time taken for two consecutive writes (in seconds) WRITE_AVG the average time taken for two consecutive writes (in seconds) READ_CNT the total number of read operations READ_MIN the lowest time taken for two consecutive reads (in seconds) READ_MAX the greatest time taken for two consecutive reads (in seconds) READ_AVG the average time taken for two consecutive reads (in seconds) COMMAND the name of the process
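As an illustration only (a hypothetical invocation, not taken from the original procedure), a run that samples disk activity every 5 seconds for one hour and prints the histogram at the end could be started as follows, assuming the display_histogram flag accepts 1 to enable it:
diskdevstat 5 3600 1
Shorter update intervals give finer-grained data at the cost of slightly more monitoring overhead.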
|
[
"install systemtap tuned-utils kernel-debuginfo",
"diskdevstat",
"netdevstat",
"diskdevstat update_interval total_duration display_histogram netdevstat update_interval total_duration display_histogram",
"PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma 5494 0 sda1 0 0.000 0.000 0.000 758 0.000 0.012 0.000 0logwatch 5520 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl 5549 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl 5585 0 sda1 0 0.000 0.000 0.000 108 0.001 0.002 0.000 perl 2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd 5429 0 sda1 0 0.000 0.000 0.000 62 0.009 0.009 0.000 crond 5379 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5473 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5415 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5433 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5425 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 5375 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5477 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 5469 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 5419 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5481 0 sda1 0 0.000 0.000 0.000 61 0.000 0.001 0.000 crond 5355 0 sda1 0 0.000 0.000 0.000 37 0.000 0.014 0.001 laptop_mode 2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd 5575 0 sda1 0 0.000 0.000 0.000 16 0.000 0.000 0.000 cat 5581 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl [output truncated]",
"PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma 2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd 2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd",
"strace -p 2789"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/diskdevstat_and_netdevstat
|
Chapter 19. Remove Red Hat JBoss Data Grid
|
Chapter 19. Remove Red Hat JBoss Data Grid 19.1. Remove Red Hat JBoss Data Grid from Your Linux System The following procedures contain instructions to remove Red Hat JBoss Data Grid from your Linux system. Warning Once deleted, all JBoss Data Grid configuration and settings are permanently lost. Procedure 19.1. Remove JBoss Data Grid from Your Linux System Shut Down Server Ensure that the JBoss Data Grid server is shut down. Navigate to the JBoss Data Grid Home Directory Use the command line to change into the level above the $JDG_HOME folder. Delete the JBoss Data Grid Home Directory Enter the following command in the terminal to remove JBoss Data Grid, replacing $JDG_HOME with the name of your JBoss Data Grid home directory:
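As an illustration only, assuming JBoss Data Grid was unpacked under the hypothetical location /opt/jboss-datagrid-6.6.0, the steps above would amount to:
cd /opt
rm -Rf jboss-datagrid-6.6.0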
|
[
"rm -Rf USDJDG_HOME"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-remove_red_hat_jboss_data_grid
|
Chapter 10. Using Red Hat subscriptions in builds
|
Chapter 10. Using Red Hat subscriptions in builds Use the following sections to install Red Hat subscription content within OpenShift Container Platform builds. 10.1. Creating an image stream tag for the Red Hat Universal Base Image To install Red Hat Enterprise Linux (RHEL) packages within a build, you can create an image stream tag to reference the Red Hat Universal Base Image (UBI). To make the UBI available in every project in the cluster, add the image stream tag to the openshift namespace. Otherwise, to make it available in a specific project , add the image stream tag to that project. Image stream tags grant access to the UBI by using the registry.redhat.io credentials that are present in the install pull secret, without exposing the pull secret to other users. This method is more convenient than requiring each developer to install pull secrets with registry.redhat.io credentials in each project. Procedure To create an ImageStreamTag resource in the openshift namespace, so it is available to developers in all projects, enter the following command: USD oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift Tip You can alternatively apply the following YAML to create an ImageStreamTag resource in the openshift namespace: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source To create an ImageStreamTag resource in a single project, enter the following command: USD oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest Tip You can alternatively apply the following YAML to create an ImageStreamTag resource in a single project: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source 10.2. Adding subscription entitlements as a build secret Builds that use Red Hat subscriptions to install content must include the entitlement keys as a build secret. Prerequisites You must have access to Red Hat Enterprise Linux (RHEL) package repositories through your subscription. The entitlement secret to access these repositories is automatically created by the Insights Operator when your cluster is subscribed. You must have access to the cluster as a user with the cluster-admin role or you have permission to access secrets in the openshift-config-managed project. Procedure Copy the entitlement secret from the openshift-config-managed namespace to the namespace of the build by entering the following commands: USD cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \USDkey, \USDvalue := .data }} {{ \USDkey }}: {{ \USDvalue }} {{ end }} EOF USD oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f - Add the etc-pki-entitlement secret as a build volume in the build configuration's Docker strategy: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 10.3. Running builds with Subscription Manager 10.3.1. Docker builds using Subscription Manager Docker strategy builds can use yum or dnf to install additional Red Hat Enterprise Linux (RHEL) packages. 
Prerequisites The entitlement keys must be added as build strategy volumes. Procedure Use the following as an example Dockerfile to install content with the Subscription Manager: FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \ 2 nss_wrapper \ uid_wrapper -y && \ yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 1 You must include the command to remove the /etc/rhsm-host directory and all its contents in your Dockerfile before executing any yum or dnf commands. 2 Use the Red Hat Package Browser to find the correct repositories for your installed packages. 3 You must restore the /etc/rhsm-host symbolic link to keep your image compatible with other Red Hat container images. 10.4. Running builds with Red Hat Satellite subscriptions 10.4.1. Adding Red Hat Satellite configurations to builds Builds that use Red Hat Satellite to install content must provide appropriate configurations to obtain content from Satellite repositories. Prerequisites You must provide or create a yum -compatible repository configuration file that downloads content from your Satellite instance. Sample repository configuration [test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem Procedure Create a ConfigMap object containing the Satellite repository configuration file by entering the following command: USD oc create configmap yum-repos-d --from-file /path/to/satellite.repo Add the Satellite repository configuration and entitlement key as a build volumes: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 10.4.2. Docker builds using Red Hat Satellite subscriptions Docker strategy builds can use Red Hat Satellite repositories to install subscription content. Prerequisites You have added the entitlement keys and Satellite repository configurations as build volumes. Procedure Use the following example to create a Dockerfile for installing content with Satellite: FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \ 2 nss_wrapper \ uid_wrapper -y && \ yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 1 You must include the command to remove the /etc/rhsm-host directory and all its contents in your Dockerfile before executing any yum or dnf commands. 2 Contact your Satellite system administrator to find the correct repositories for the build's installed packages. 3 You must restore the /etc/rhsm-host symbolic link to keep your image compatible with other Red Hat container images. Additional resources How to use builds with Red Hat Satellite subscriptions and which certificate to use 10.5. Running builds using SharedSecret objects You can use a SharedSecret object to securely access the entitlement keys of a cluster in builds. The SharedSecret object allows you to share and synchronize secrets across namespaces. Important The Shared Resource CSI Driver feature is now generally available in Builds for Red Hat OpenShift 1.1 . 
This feature is now removed in OpenShift Container Platform 4.18 and later. To use this feature, ensure that you are using Builds for Red Hat OpenShift 1.1 or later. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see Enabling features using feature gates . You must have permission to perform the following actions: Create build configs and start builds. Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back. Determine if the builder service account available to you in your namespace is allowed to use the given SharedSecret CR instance. In other words, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed. Note If neither of the last two prerequisites in this list are met, establish, or ask someone to establish, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances. Procedure Use oc apply to create a SharedSecret object instance with the cluster's entitlement secret. Important You must have cluster administrator permissions to create SharedSecret objects. Example oc apply -f command with YAML Role object definition USD oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF Create a role to grant the builder service account permission to access the SharedSecret object: Example oc apply -f command USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF Create a RoleBinding object that grants the builder service account permission to access the SharedSecret object by running the following command: Example oc create rolebinding command USD oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder Add the entitlement secret to your BuildConfig object by using a CSI volume mount: Example YAML BuildConfig object definition apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \ 2 nss_wrapper \ uid_wrapper -y && \ yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: "/etc/pki/entitlement" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI 1 You must include the command to remove the /etc/rhsm-host directory and all its contents in the Dockerfile before executing any yum or dnf commands. 2 Use the Red Hat Package Browser to find the correct repositories for your installed packages. 3 You must restore the /etc/rhsm-host symbolic link to keep your image compatible with other Red Hat container images. 4 You must set readOnly to true to mount the shared resource in the build. 
5 Reference the name of the SharedSecret object to include it in the build. Start a build from the BuildConfig object and follow the logs using the oc command. USD oc start-build uid-wrapper-rhel9 -n build-namespace -F 10.6. Additional resources Importing simple content access certificates with Insights Operator Enabling features using feature gates Managing image streams Build strategies
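Before you create the BuildConfig object described in section 10.5, you can verify the SharedSecret prerequisites from the command line. The following is a minimal sketch that assumes the etc-pki-entitlement SharedSecret object and the build-namespace namespace from the examples above; the fully qualified resource name is an assumption to confirm against your cluster.
# List the SharedSecret CR instances that are visible to you
USD oc get sharedsecrets
# Check whether the builder service account in your namespace can use the instance
USD oc adm policy who-can use sharedsecrets.sharedresource.openshift.io etc-pki-entitlement -n build-namespace
If the builder service account is not listed in the output, create or request the Role and RoleBinding shown in the procedure before starting the build.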
|
[
"oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source",
"oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source",
"cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \\USDkey, \\USDvalue := .data }} {{ \\USDkey }}: {{ \\USDvalue }} {{ end }} EOF oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f -",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3",
"[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem",
"oc create configmap yum-repos-d --from-file /path/to/satellite.repo",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3",
"oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF",
"oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI",
"oc start-build uid-wrapper-rhel9 -n build-namespace -F"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_buildconfig/running-entitled-builds
|
4.9. Additional Fencing Configuration Options
|
4.9. Additional Fencing Configuration Options Table 4.2, "Advanced Properties of Fencing Devices" summarizes additional properties you can set for fencing devices. Note that these properties are for advanced use only. Table 4.2. Advanced Properties of Fencing Devices Field Type Default Description pcmk_host_argument string port An alternate parameter to supply instead of port. Some devices do not support the standard port parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of none can be used to tell the cluster not to supply any additional parameters. pcmk_reboot_action string reboot An alternate command to run instead of reboot . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the reboot action. pcmk_reboot_timeout time 60s Specify an alternate timeout to use for reboot actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for reboot actions. pcmk_reboot_retries integer 2 The maximum number of times to retry the reboot command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries reboot actions before giving up. pcmk_off_action string off An alternate command to run instead of off . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the off action. pcmk_off_timeout time 60s Specify an alternate timeout to use for off actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for off actions. pcmk_off_retries integer 2 The maximum number of times to retry the off command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries off actions before giving up. pcmk_list_action string list An alternate command to run instead of list . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the list action. pcmk_list_timeout time 60s Specify an alternate timeout to use for list actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for list actions. pcmk_list_retries integer 2 The maximum number of times to retry the list command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries list actions before giving up. pcmk_monitor_action string monitor An alternate command to run instead of monitor . Some devices do not support the standard commands or may provide additional ones.
Use this to specify an alternate, device-specific, command that implements the monitor action. pcmk_monitor_timeout time 60s Specify an alternate timeout to use for monitor actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for monitor actions. pcmk_monitor_retries integer 2 The maximum number of times to retry the monitor command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries monitor actions before giving up. pcmk_status_action string status An alternate command to run instead of status . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the status action. pcmk_status_timeout time 60s Specify an alternate timeout to use for status actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for status actions. pcmk_status_retries integer 2 The maximum number of times to retry the status command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries status actions before giving up.
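In practice, these properties are passed as extra options when a fencing device is created or updated with the pcs command. The following is an illustrative sketch only, not a recommended configuration: the fence_apc_snmp agent, the device name myapc, the address, and the credentials are placeholder assumptions, and the timeout and retry values must be tuned for your hardware.
pcs stonith create myapc fence_apc_snmp ipaddr="apc.example.com" login="apc" passwd="apc" pcmk_reboot_timeout=120s pcmk_reboot_retries=3 pcmk_monitor_timeout=90s
An existing device can be adjusted later in the same way, for example with pcs stonith update myapc pcmk_off_timeout=120s.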
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicesadditional-HAAR
|
Securing Applications and Services Guide
|
Securing Applications and Services Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services
|
[
"/realms/{realm-name}/.well-known/openid-configuration",
"/realms/{realm-name}/protocol/openid-connect/auth",
"/realms/{realm-name}/protocol/openid-connect/token",
"/realms/{realm-name}/protocol/openid-connect/userinfo",
"/realms/{realm-name}/protocol/openid-connect/logout",
"/realms/{realm-name}/protocol/openid-connect/certs",
"/realms/{realm-name}/protocol/openid-connect/token/introspect",
"/realms/{realm-name}/clients-registrations/openid-connect",
"/realms/{realm-name}/protocol/openid-connect/revoke",
"/realms/{realm-name}/protocol/openid-connect/auth/device",
"/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth",
"curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"npm install keycloak-js",
"import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); try { const authenticated = await keycloak.init(); if (authenticated) { console.log('User is authenticated'); } else { console.log('User is not authenticated'); } } catch (error) { console.error('Failed to initialize adapter:', error); }",
"await keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` });",
"<!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html>",
"await keycloak.init({ onLoad: 'login-required' });",
"async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); }",
"try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers();",
"await keycloak.init({ flow: 'implicit' })",
"await keycloak.init({ flow: 'hybrid' });",
"await keycloak.init({ adapter: 'cordova-native' });",
"<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />",
"import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); await keycloak.init({ adapter: KeycloakCapacitorAdapter, });",
"import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { async login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); await keycloak.init({ adapter: MyCustomAdapter, });",
"// Recommended way to initialize the adapter. new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); // Alternatively a string to the path of the `keycloak.json` file. // Has some performance implications, as it will load the keycloak.json file from the server. // This version might also change in the future and is therefore not recommended. new Keycloak(\"http://keycloak-server/keycloak.json\");",
"try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); }",
"try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); }",
"keycloak.onAuthSuccess = () => console.log('Authenticated!');",
"mkdir myapp && cd myapp",
"\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-26.0.10.tgz\" }",
"const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });",
"npm install express-session",
"\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\": \"node server.js\" },",
"npm run start",
"const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);",
"const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);",
"const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });",
"const keycloak = new Keycloak({ scope: 'offline_access' });",
"npm install express",
"const express = require('express'); const app = express();",
"app.use( keycloak.middleware() );",
"app.listen(3000, function () { console.log('App listening on port 3000'); });",
"const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );",
"app.get( '/complain', keycloak.protect(), complaintHandler );",
"app.get( '/special', keycloak.protect('special'), specialHandler );",
"app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );",
"app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );",
"app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile });",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });",
"keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})",
"app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted",
"function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );",
"Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };",
"app.use( keycloak.middleware( { logout: '/logoff' } ));",
"https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout",
"app.use( keycloak.middleware( { admin: '/callbacks' } );",
"LoadModule auth_openidc_module modules/mod_auth_openidc.so ServerName USD{HOSTIP} <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html #this is required by mod_auth_openidc OIDCCryptoPassphrase a-random-secret-used-by-apache-oidc-and-balancer OIDCProviderMetadataURL USD{KC_ADDR}/realms/USD{KC_REALM}/.well-known/openid-configuration OIDCClientID USD{CLIENT_ID} OIDCClientSecret USD{CLIENT_SECRET} OIDCRedirectURI http://USD{HOSTIP}/USD{CLIENT_APP_NAME}/redirect_uri # maps the preferred_username claim to the REMOTE_USER environment variable OIDCRemoteUserClaim preferred_username <Location /USD{CLIENT_APP_NAME}/> AuthType openid-connect Require valid-user </Location> </VirtualHost>",
"<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>5.0.0.Final</version> <configuration> <feature-packs> <feature-pack> <location>wildfly@maven(org.jboss.universe:community-universe)#32.0.1.Final</location> </feature-pack> <feature-pack> <groupId>org.keycloak</groupId> <artifactId>keycloak-saml-adapter-galleon-pack</artifactId> <version>26.0.10</version> </feature-pack> </feature-packs> <layers> <layer>core-server</layer> <layer>web-server</layer> <layer>jaxrs-server</layer> <layer>datasources-web-server</layer> <layer>webservices</layer> <layer>keycloak-saml</layer> <layer>keycloak-client-saml</layer> <layer>keycloak-client-saml-ejb</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin>",
"<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>11.0.2.Final</version> <configuration> <feature-packs> <feature-pack> <location>wildfly@maven(org.jboss.universe:community-universe)#32.0.1.Final</location> </feature-pack> <feature-pack> <groupId>org.keycloak</groupId> <artifactId>keycloak-saml-adapter-galleon-pack</artifactId> <version>26.0.10</version> </feature-pack> </feature-packs> <layers> <layer>core-server</layer> <layer>web-server</layer> <layer>jaxrs-server</layer> <layer>datasources-web-server</layer> <layer>webservices</layer> <layer>keycloak-saml</layer> <layer>keycloak-client-saml</layer> <layer>keycloak-client-saml-ejb</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin>",
"<plugin> <groupId>org.jboss.eap.plugins</groupId> <artifactId>eap-maven-plugin</artifactId> <version>1.0.0.Final-redhat-00014</version> <configuration> <channels> <channel> <manifest> <groupId>org.jboss.eap.channels</groupId> <artifactId>eap-8.0</artifactId> </manifest> </channel> </channels> <feature-packs> <feature-pack> <location>org.keycloak:keycloak-saml-adapter-galleon-pack</location> </feature-pack> </feature-packs> <layers> <layer>core-server</layer> <layer>web-server</layer> <layer>jaxrs-server</layer> <layer>datasources-web-server</layer> <layer>webservices</layer> <layer>keycloak-saml</layer> <layer>keycloak-client-saml</layer> <layer>keycloak-client-saml-ejb</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin>",
"<keycloak-saml-adapter xmlns=\"urn:keycloak:saml:adapter\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:keycloak:saml:adapter https://www.keycloak.org/schema/keycloak_saml_adapter_1_10.xsd\"> <SP entityID=\"http://localhost:8081/sales-post-sig/\" sslPolicy=\"EXTERNAL\" nameIDPolicyFormat=\"urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified\" logoutPage=\"/logout.jsp\" forceAuthentication=\"false\" isPassive=\"false\" turnOffChangeSessionIdOnLogin=\"false\" autodetectBearerOnly=\"false\"> <Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"http://localhost:8080/sales-post-sig/\" password=\"test123\"/> <Certificate alias=\"http://localhost:8080/sales-post-sig/\"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> <RoleIdentifiers> <Attribute name=\"Role\"/> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP entityID=\"idp\" signaturesRequired=\"true\"> <SingleSignOnService requestBinding=\"POST\" bindingUrl=\"http://localhost:8081/realms/demo/protocol/saml\" /> <SingleLogoutService requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8081/realms/demo/protocol/saml\" redirectBindingUrl=\"http://localhost:8081/realms/demo/protocol/saml\" /> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP> </SP> </keycloak-saml-adapter>",
"<web-app xmlns=\"https://jakarta.ee/xml/ns/jakartaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://jakarta.ee/xml/ns/jakartaee https://jakarta.ee/xml/ns/jakartaee/web-app_6_0.xsd\" version=\"6.0\"> <module-name>customer-portal</module-name> <security-constraint> <web-resource-collection> <web-resource-name>Admins</web-resource-name> <url-pattern>/admin/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK-SAML</auth-method> <realm-name>this is ignored currently</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app>",
"<extensions> <extension module=\"org.keycloak.keycloak-saml-adapter-subsystem\"/> </extensions> <profile> <subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"> <secure-deployment name=\"WAR MODULE NAME.war\"> <SP entityID=\"APPLICATION URL\"> </SP> </secure-deployment> </subsystem> </profile>",
"<subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"> <secure-deployment name=\"saml-post-encryption.war\"> <SP entityID=\"http://localhost:8080/sales-post-enc/\" sslPolicy=\"EXTERNAL\" nameIDPolicyFormat=\"urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified\" logoutPage=\"/logout.jsp\" forceAuthentication=\"false\"> <Keys> <Key signing=\"true\" encryption=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"http://localhost:8080/sales-post-enc/\" password=\"test123\"/> <Certificate alias=\"http://localhost:8080/sales-post-enc/\"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> <RoleIdentifiers> <Attribute name=\"Role\"/> </RoleIdentifiers> <IDP entityID=\"idp\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8080/realms/saml-demo/protocol/saml\"/> <SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8080/realms/saml-demo/protocol/saml\" redirectBindingUrl=\"http://localhost:8080/realms/saml-demo/protocol/saml\"/> <Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"saml-demo\"/> </KeyStore> </Key> </Keys> </IDP> </SP> </secure-deployment> </subsystem>",
"samesite-cookie(mode=None, cookie-pattern=JSESSIONID)",
"<context-param> <param-name>keycloak.sessionIdMapperUpdater.classes</param-name> <param-value>org.keycloak.adapters.saml.wildfly.infinispan.InfinispanSessionCacheIdMapperUpdater</param-value> </context-param>",
"package org.keycloak.adapters.saml; public class SamlPrincipal implements Serializable, Principal { /** * Get full saml assertion * * @return */ public AssertionType getAssertion() { } /** * Get SAML subject sent in assertion * * @return */ public String getSamlSubject() { } /** * Subject nameID format * * @return */ public String getNameIDFormat() { } @Override public String getName() { } /** * Convenience function that gets Attribute value by attribute name * * @param name * @return */ public List<String> getAttributes(String name) { } /** * Convenience function that gets Attribute value by attribute friendly name * * @param friendlyName * @return */ public List<String> getFriendlyAttributes(String friendlyName) { } /** * Convenience function that gets first value of an attribute by attribute name * * @param name * @return */ public String getAttribute(String name) { } /** * Convenience function that gets first value of an attribute by attribute name * * * @param friendlyName * @return */ public String getFriendlyAttribute(String friendlyName) { } /** * Get set of all assertion attribute names * * @return */ public Set<String> getAttributeNames() { } /** * Get set of all assertion friendly attribute names * * @return */ public Set<String> getFriendlyNames() { } }",
"<error-page> <error-code>403</error-code> <location>/ErrorHandler</location> </error-page>",
"public class SamlAuthenticationError implements AuthenticationError { public static enum Reason { EXTRACTION_FAILURE, INVALID_SIGNATURE, ERROR_STATUS } public Reason getReason() { return reason; } public StatusResponseType getStatus() { return status; } }",
"package example; import java.io.InputStream; import org.keycloak.adapters.saml.SamlConfigResolver; import org.keycloak.adapters.saml.SamlDeployment; import org.keycloak.adapters.saml.config.parsers.DeploymentBuilder; import org.keycloak.adapters.saml.config.parsers.ResourceLoader; import org.keycloak.adapters.spi.HttpFacade; import org.keycloak.saml.common.exceptions.ParsingException; public class SamlMultiTenantResolver implements SamlConfigResolver { @Override public SamlDeployment resolve(HttpFacade.Request request) { String host = request.getHeader(\"Host\"); String realm = null; if (host.contains(\"tenant1\")) { realm = \"tenant1\"; } else if (host.contains(\"tenant2\")) { realm = \"tenant2\"; } else { throw new IllegalStateException(\"Not able to guess the keycloak-saml.xml to load\"); } InputStream is = getClass().getResourceAsStream(\"/\" + realm + \"-keycloak-saml.xml\"); if (is == null) { throw new IllegalStateException(\"Not able to find the file /\" + realm + \"-keycloak-saml.xml\"); } ResourceLoader loader = new ResourceLoader() { @Override public InputStream getResourceAsStream(String path) { return getClass().getResourceAsStream(path); } }; try { return new DeploymentBuilder().build(is, loader); } catch (ParsingException e) { throw new IllegalStateException(\"Cannot load SAML deployment\", e); } } }",
"<web-app> <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.SamlMultiTenantResolver</param-value> </context-param> </web-app>",
"<samlp:Status> <samlp:StatusCode Value=\"urn:oasis:names:tc:SAML:2.0:status:Responder\"> <samlp:StatusCode Value=\"urn:oasis:names:tc:SAML:2.0:status:AuthnFailed\"/> </samlp:StatusCode> <samlp:StatusMessage>authentication_expired</samlp:StatusMessage> </samlp:Status>",
"<SP entityID=\"sp\" sslPolicy=\"ssl\" nameIDPolicyFormat=\"format\" forceAuthentication=\"true\" isPassive=\"false\" keepDOMAssertion=\"true\" autodetectBearerOnly=\"false\"> </SP>",
"<Keys> <Key signing=\"true\" > </Key> </Keys>",
"<Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"myPrivate\" password=\"test123\"/> <Certificate alias=\"myCertAlias\"/> </KeyStore> </Key> </Keys>",
"<Keys> <Key signing=\"true\"> <PrivateKeyPem> 2341251234AB31234==231BB998311222423522334 </PrivateKeyPem> <CertificatePem> 211111341251234AB31234==231BB998311222423522334 </CertificatePem> </Key> </Keys>",
"<SP ...> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> </SP> <SP ...> <PrincipalNameMapping policy=\"FROM_ATTRIBUTE\" attribute=\"email\" /> </SP>",
"<RoleIdentifiers> <Attribute name=\"Role\"/> <Attribute name=\"member\"/> <Attribute name=\"memberOf\"/> </RoleIdentifiers>",
"<RoleIdentifiers> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP> </IDP>",
"<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.file.location\" value=\"/opt/mappers/roles.properties\"/> </RoleMappingsProvider>",
"<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/conf/roles.properties\"/> </RoleMappingsProvider>",
"roleA=roleX,roleY roleB= kc_user=roleZ",
"role\\u0020A=roleX,roleY",
"<IDP entityID=\"idp\" signaturesRequired=\"true\" signatureAlgorithm=\"RSA_SHA1\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> </IDP>",
"<AllowedClockSkew unit=\"MILLISECONDS\">3500</AllowedClockSkew>",
"<SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" requestBinding=\"post\" bindingUrl=\"url\"/>",
"<SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"redirect\" responseBinding=\"post\" postBindingUrl=\"posturl\" redirectBindingUrl=\"redirecturl\">",
"<IDP entityID=\"idp\"> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP>",
"<HttpClient connectionPoolSize=\"10\" disableTrustManager=\"false\" allowAnyHostname=\"false\" clientKeystore=\"classpath:keystore.jks\" clientKeystorePassword=\"pwd\" truststore=\"classpath:truststore.jks\" truststorePassword=\"pwd\" proxyUrl=\"http://proxy/\" socketTimeout=\"5000\" connectionTimeout=\"6000\" connectionTtl=\"500\" />",
"install httpd mod_auth_mellon mod_ssl openssl",
"mkdir /etc/httpd/saml2",
"<Location / > MellonEnable info MellonEndpointPath /mellon/ MellonSPMetadataFile /etc/httpd/saml2/mellon_metadata.xml MellonSPPrivateKeyFile /etc/httpd/saml2/mellon.key MellonSPCertFile /etc/httpd/saml2/mellon.crt MellonIdPMetadataFile /etc/httpd/saml2/idp_metadata.xml </Location> <Location /private > AuthType Mellon MellonEnable auth Require valid-user </Location>",
"MellonSecureCookie On MellonCookieSameSite none",
"fqdn=`hostname` mellon_endpoint_url=\"https://USD{fqdn}/mellon\" mellon_entity_id=\"USD{mellon_endpoint_url}/metadata\" file_prefix=\"USD(echo \"USDmellon_entity_id\" | sed 's/[^A-Za-z.]/_/g' | sed 's/__*/_/g')\"",
"/usr/libexec/mod_auth_mellon/mellon_create_metadata.sh USDmellon_entity_id USDmellon_endpoint_url",
"mv USD{file_prefix}.cert /etc/httpd/saml2/mellon.crt mv USD{file_prefix}.key /etc/httpd/saml2/mellon.key mv USD{file_prefix}.xml /etc/httpd/saml2/mellon_metadata.xml",
"curl -k -o /etc/httpd/saml2/idp_metadata.xml https://USDidp_host/realms/test_realm/protocol/saml/descriptor",
"apachectl configtest",
"systemctl restart httpd.service",
"auth: token: realm: http://localhost:8080/realms/master/protocol/docker-v2/auth service: docker-test issuer: http://localhost:8080/realms/master",
"REGISTRY_AUTH_TOKEN_REALM: http://localhost:8080/realms/master/protocol/docker-v2/auth REGISTRY_AUTH_TOKEN_SERVICE: docker-test REGISTRY_AUTH_TOKEN_ISSUER: http://localhost:8080/realms/master",
"docker login localhost:5000 -u USDusername Password: ******* Login Succeeded",
"Authorization: bearer eyJhbGciOiJSUz",
"Authorization: basic BASE64(client-id + ':' + client-secret)",
"curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/realms/master/clients-registrations/default",
"String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();",
"export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh",
"c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg",
"kcreg.sh config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client",
"c:\\> kcreg config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client",
"kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks",
"c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks",
"kcreg.sh help",
"c:\\> kcreg help",
"kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient",
"kcreg.sh create -s clientId=myclient -t USDTOKEN",
"c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient",
"c:\\> kcreg create -s clientId=myclient -t %TOKEN%",
"kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o",
"C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o",
"kcreg.sh get myclient",
"C:\\> kcreg get myclient",
"kcreg.sh get myclient -e install > keycloak.json",
"C:\\> kcreg get myclient -e install > keycloak.json",
"kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json",
"C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json",
"kcreg.sh update myclient -s enabled=false -d redirectUris",
"C:\\> kcreg update myclient -s enabled=false -d redirectUris",
"kcreg.sh update myclient --merge -d redirectUris -f mychanges.json",
"C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json",
"kcreg.sh delete myclient",
"C:\\> kcreg delete myclient",
"/realms/{realm}/protocol/openid-connect/token",
"{ \"access_token\" : \".....\", \"refresh_token\" : \".....\", \"expires_in\" : \"....\" }",
"{ \"error\" : \"....\" \"error_description\" : \"....\" }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" --data-urlencode \"requested_token_type=urn:ietf:params:oauth:token-type:refresh_token\" -d \"audience=target-client\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"{ \"access_token\" : \"....\", \"refresh_token\" : \"....\", \"expires_in\" : 3600 }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" --data-urlencode \"requested_token_type=urn:ietf:params:oauth:token-type:access_token\" -d \"requested_issuer=google\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"{ \"access_token\" : \"....\", \"expires_in\" : 3600 \"account-link-url\" : \"https://....\" }",
"{ \"error\" : \"....\", \"error_description\" : \"...\" \"account-link-url\" : \"https://....\" }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" -d \"subject_issuer=myOidcProvider\" --data-urlencode \"subject_token_type=urn:ietf:params:oauth:token-type:access_token\" -d \"audience=target-client\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"{ \"access_token\" : \"....\", \"refresh_token\" : \"....\", \"expires_in\" : 3600 }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" --data-urlencode \"requested_token_type=urn:ietf:params:oauth:token-type:access_token\" -d \"audience=target-client\" -d \"requested_subject=wburke\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"requested_subject=wburke\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency>",
"import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmRepresentation realm = keycloak.realm(\"master\").toRepresentation();",
"<dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency> </dependencies>",
"{ \"realm\": \"hello-world-authz\", \"auth-server-url\" : \"http://localhost:8080\", \"resource\" : \"hello-world-authz-service\", \"credentials\": { \"secret\": \"secret\" } }",
"// create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create();",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission(\"Default Resource\"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName(\"New Resource\"); newResource.setType(\"urn:hello-world-authz:resources:example\"); newResource.addScope(new ScopeRepresentation(\"urn:hello-world-authz:scopes:view\")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource);",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println(\"Token status is: \" + requestingPartyToken.getActive()); System.out.println(\"Permissions granted by the server: \"); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); }",
"\"credentials\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\" }",
"\"credentials\": { \"jwt\": { \"client-keystore-file\": \"classpath:keystore-client.jks\", \"client-keystore-type\": \"JKS\", \"client-keystore-password\": \"storepass\", \"client-key-password\": \"keypass\", \"client-key-alias\": \"clientkey\", \"token-expiration\": 10 } }",
"\"credentials\": { \"secret-jwt\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\", \"algorithm\": \"HS512\" } }",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency>",
"{ \"enforcement-mode\" : \"ENFORCING\", \"paths\": [ { \"path\" : \"/users/*\", \"methods\" : [ { \"method\": \"GET\", \"scopes\" : [\"urn:app.com:scopes:view\"] }, { \"method\": \"POST\", \"scopes\" : [\"urn:app.com:scopes:create\"] } ] } ] }",
"{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-request-parameter\": \"{request.parameter['a']}\", \"claim-from-header\": \"{request.header['b']}\", \"claim-from-cookie\": \"{request.cookie['c']}\", \"claim-from-remoteAddr\": \"{request.remoteAddr}\", \"claim-from-method\": \"{request.method}\", \"claim-from-uri\": \"{request.uri}\", \"claim-from-relativePath\": \"{request.relativePath}\", \"claim-from-secure\": \"{request.secure}\", \"claim-from-json-body-object\": \"{request.body['/a/b/c']}\", \"claim-from-json-body-array\": \"{request.body['/d/1']}\", \"claim-from-body\": \"{request.body}\", \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"], \"param-replace-multiple-placeholder\": \"Test {keycloak.access_token['/custom_claim/0']} and {request.parameter['a']}\" } } } ] }",
"{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"http\": { \"claims\": { \"claim-a\": \"/a\", \"claim-d\": \"/d\", \"claim-d0\": \"/d/0\", \"claim-d-all\": [ \"/d/0\", \"/d/1\" ] }, \"url\": \"http://mycompany/claim-provider\", \"method\": \"POST\", \"headers\": { \"Content-Type\": \"application/x-www-form-urlencoded\", \"header-b\": [ \"header-b-value1\", \"header-b-value2\" ], \"Authorization\": \"Bearer {keycloak.access_token}\" }, \"parameters\": { \"param-a\": [ \"param-a-value1\", \"param-a-value2\" ], \"param-subject\": \"{keycloak.access_token['/sub']}\", \"param-user-name\": \"{keycloak.access_token['/preferred_username']}\", \"param-other-claims\": \"{keycloak.access_token['/custom_claim']}\" } } } } ] }",
"{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"] } } } ] }",
"public class MyClaimInformationPointProviderFactory implements ClaimInformationPointProviderFactory<MyClaimInformationPointProvider> { @Override public String getName() { return \"my-claims\"; } @Override public void init(PolicyEnforcer policyEnforcer) { } @Override public MyClaimInformationPointProvider create(Map<String, Object> config) { return new MyClaimInformationPointProvider(config); } }",
"public class MyClaimInformationPointProvider implements ClaimInformationPointProvider { private final Map<String, Object> config; public MyClaimInformationPointProvider(Map<String, Object> config) { this.config = config; } @Override public Map<String, List<String>> resolve(HttpFacade httpFacade) { Map<String, List<String>> claims = new HashMap<>(); // put whatever claim you want into the map return claims; } }",
"HttpServletRequest request = // obtain javax.servlet.http.HttpServletRequest AuthorizationContext authzContext = (AuthorizationContext) request.getAttribute(AuthorizationContext.class.getName());",
"if (authzContext.hasResourcePermission(\"Project Resource\")) { // user can access the Project Resource } if (authzContext.hasResourcePermission(\"Admin Resource\")) { // user can access administration resources } if (authzContext.hasScopePermission(\"urn:project.com:project:create\")) { // user can create new projects }",
"if (User.hasRole('user')) { // user can access the Project Resource } if (User.hasRole('admin')) { // user can access administration resources } if (User.hasRole('project-manager')) { // user can create new projects }",
"ClientAuthorizationContext clientContext = ClientAuthorizationContext.class.cast(authzContext); AuthzClient authzClient = clientContext.getClient();",
"{ \"truststore\": \"path_to_your_trust_store\", \"truststore-password\": \"trust_store_password\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/securing_applications_and_services_guide/policy-enforcer-
|
Chapter 14. maven
|
Chapter 14. maven 14.1. maven:http-proxy-list 14.1.1. Description Lists HTTP proxy configurations for Maven remote repositories 14.1.2. Syntax maven:http-proxy-list [options] 14.1.3. Options Name Description --help Display this help message -x, --show-passwords Do not hide passwords related to Maven encryption 14.2. maven:http-proxy 14.2.1. Description Manage HTTP proxy configuration for Maven remote repositories 14.2.2. Syntax maven:http-proxy [options] [hostPort] 14.2.3. Arguments Name Description hostPort host:port of HTTP proxy 14.2.4. Options Name Description -p, --password Password for remote repository (may be encrypted, see "maven:password -ep") --help Display this help message -f, --force Do not ask for confirmation -id Identifier of HTTP proxy --change Changes HTTP proxy configuration in Maven settings -n, --non-proxy-hosts Non-proxied hosts (in the format '192.168.*|localhost|... ') --remove Removes HTTP proxy configuration from Maven settings --add Adds HTTP proxy configuration to Maven settings -u, --username Username for remote repository -x, --show-passwords Do not hide passwords related to Maven encryption 14.3. maven:password 14.3.1. Description Manage passwords for remote repositories and proxies 14.3.2. Syntax maven:password [options] 14.3.3. Options Name Description -emp, --encrypt-master-password Encrypts master password used to encrypt/decrypt other passwords, see "mvn -emp" --help Display this help message -ep, --encrypt-password Encrypts passwords to use for remote repositories and proxies, see "mvn -ep" -p, --persist 14.4. maven:repository-add 14.4.1. Description Adds Maven repository 14.4.2. Syntax maven:repository-add [options] [uri] 14.4.3. Arguments Name Description uri Repository URI. It may be file:// based, http(s):// based, may use other known protocol or even property placeholders (like USD{karaf.base}) 14.4.4. Options Name Description -nr, --no-releases Disable release handling in this repository -p, --password Password for remote repository (may be encrypted, see "maven:password -ep") --help Display this help message -f, --force Do not ask for confirmation -id Identifier of repository -idx Index at which new repository is to be inserted (0-based) (defaults to last - repository will be appended) -d, --default Edit default repository instead of remote one -s, --snapshots Enable SNAPSHOT handling in the repository -cp, --checksum-policy Checksum policy for repository (ignore, warn (default), fail) -u, --username Username for remote repository -x, --show-passwords Do not hide passwords related to Maven encryption -up, --update-policy Update policy for repository (never, daily (default), interval:N, always) 14.5. maven:repository-change 14.5.1. Description Changes configuration of Maven repository 14.5.2. Syntax maven:repository-change [options] [uri] 14.5.3. Arguments Name Description uri Repository URI. It may be file:// based, http(s):// based, may use other known protocol or even property placeholders (like USD{karaf.base}) 14.5.4. 
Options Name Description -nr, --no-releases Disable release handling in this repository -p, --password Password for remote repository (may be encrypted, see "maven:password -ep") --help Display this help message -f, --force Do not ask for confirmation -id Identifier of repository -d, --default Edit default repository instead of remote one -s, --snapshots Enable SNAPSHOT handling in the repository -cp, --checksum-policy Checksum policy for repository (ignore, warn (default), fail) -u, --username Username for remote repository -x, --show-passwords Do not hide passwords related to Maven encryption -up, --update-policy Update policy for repository (never, daily (default), interval:N, always) 14.6. maven:repository-list 14.6.1. Description Maven repository summary. 14.6.2. Syntax maven:repository-list [options] 14.6.3. Options Name Description --help Display this help message -v, --verbose Show additional information (policies, source) -x, --show-passwords Do not hide passwords related to Maven encryption 14.7. maven:repository-remove 14.7.1. Description Removes Maven repository 14.7.2. Syntax maven:repository-remove [options] 14.7.3. Options Name Description --help Display this help message -f, --force Do not ask for confirmation -id Identifier of repository -d, --default Edit default repository instead of remote one -x, --show-passwords Do not hide passwords related to Maven encryption 14.8. maven:summary 14.8.1. Description Maven configuration summary. 14.8.2. Syntax maven:summary [options] 14.8.3. Options Name Description --help Display this help message -p, --property-ids Use PID property identifiers instead of their names -s, --source Adds information about where the value is configured -d, --description Adds description of Maven configuration options -x, --show-passwords Do not hide passwords related to Maven encryption
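To show how these commands fit together, the following console sketch registers an HTTP proxy and then adds a remote repository. The identifiers, host names, credentials, and URLs are placeholders, and the policy values are simply the documented defaults and alternatives listed above.
maven:http-proxy --add -id my-proxy -u proxyuser -p proxypass -n 'localhost|*.example.com' proxy.example.com:3128
maven:repository-add -id my-repo -s -up daily -cp warn https://repo.example.com/maven2
maven:repository-list -v
maven:summary -s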
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/maven
|
Chapter 3. Installing a cluster with z/VM on IBM Z and IBM LinuxONE in a restricted network
|
Chapter 3. Installing a cluster with z/VM on IBM Z and IBM LinuxONE in a restricted network In OpenShift Container Platform version 4.16, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. 
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 3.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.4.3. 
Minimum IBM Z system environment You can install OpenShift Container Platform version 4.16 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One instance of z/VM 7.2 or later On your z/VM instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up. Disk storage FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 3.4.4. Preferred IBM Z system environment Hardware requirements Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.2 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine.
To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE (IBM(R) Documentation). IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up. Disk storage FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 3.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM(R) Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 3.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. 
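When you provide the IP configuration statically, the values are expressed as dracut-style boot arguments, which for z/VM guests are placed in the parameter file for each node. The following line is an illustrative sketch only: the node address and hostname reuse the sample values from the DNS examples later in this chapter, while the gateway address, interface name, and rd.znet device bus IDs are placeholders that must match your environment, and the additional coreos.inst.* arguments that a complete parameter file requires are omitted here.

ip=192.168.1.97::192.168.1.1:255.255.255.0:control-plane0.ocp4.example.com:encbdf0:none nameserver=192.168.1.5 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0

See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for the full set of supported boot arguments.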
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 3.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 3.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. 
The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . 
Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. 
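If you serve these zones with BIND, you can check the zone file syntax before you reload the name server. The following commands are a sketch that assumes the forward and reverse zone files are saved as /var/named/ocp4.example.com.db and /var/named/1.168.192.in-addr.arpa.db ; adjust the zone and file names to match your environment:

$ named-checkzone ocp4.example.com /var/named/ocp4.example.com.db
$ named-checkzone 1.168.192.in-addr.arpa /var/named/1.168.192.in-addr.arpa.db

Each command prints the loaded serial number and OK when the file parses cleanly.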
Note A PTR record is not required for the OpenShift Container Platform application wildcard. 3.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. 
X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 
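After you adapt the sample configuration to your environment, you can have HAProxy validate the file before you apply it. The following commands are illustrative and assume the configuration is stored at the default path /etc/haproxy/haproxy.cfg on a systemd-managed host:

$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl restart haproxy

The -c flag only parses and checks the configuration and then exits; it does not start the load balancer.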
Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure.
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 3.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key.
The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 3.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 
3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 3.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. 
In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 3.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. 
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 3.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 3.11. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. 
The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 3.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. 
If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . 
Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Booting the installation on IBM Z(R) to install RHEL in z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 3.12.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 3.12.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. 
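For example, a manually added static configuration pairs the two arguments on the kernel command line as follows. This is a minimal sketch: the address, gateway, netmask, hostname, and interface name are placeholders taken from the examples that follow, not values specific to this installation.
rd.neednet=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41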
The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. To configure the bonded interface with a VLAN and to use DHCP, refer to the following example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 3.13.
Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 3.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
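While you wait for the compute nodes to register, it can be convenient to watch the node list and the CSR queue side by side. The following one-liner is a minimal sketch rather than part of the documented procedure; the 10-second interval is an arbitrary choice:
watch -n10 'oc get nodes; oc get csr | grep -i pending'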
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 3.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 3.16.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available only for non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.16.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 3.16.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 3.18. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
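As a rough illustration of the trusted CA step, the additional trust store is a ConfigMap in the openshift-config namespace that the cluster image configuration references. The following sketch assumes a hypothetical mirror registry at registry.example.com:5000 with its CA certificate in ca.crt; follow the linked procedure for the authoritative steps:
oc create configmap registry-config --from-file=registry.example.com..5000=ca.crt -n openshift-config
oc patch image.config.openshift.io/cluster --type merge -p '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}'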
|
[
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z
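The managementState snippets above (Removed and Managed) are applied by editing configs.imageregistry/cluster interactively. As a hedged sketch that is not part of the original procedure, the same change can be made non-interactively with oc patch, assuming the default resource name cluster used throughout these commands:
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
The same form with "Removed" reverts the Operator to the removed state.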
|
probe::irq_handler.entry
|
probe::irq_handler.entry
Name: probe::irq_handler.entry - Execution of interrupt handler starting
Synopsis: irq_handler.entry
Values:
next_irqaction - pointer to irqaction for shared interrupts
thread_fn - interrupt handler function for threaded interrupts
thread - thread pointer for threaded interrupts
thread_flags - Flags related to thread
irq - irq number
flags_str - symbolic string representation of IRQ flags
dev_name - name of device
action - struct irqaction* for this interrupt num
dir - pointer to the proc/irq/NN/name entry
flags - Flags for IRQ handler
dev_id - Cookie to identify device
handler - interrupt handler function
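A minimal usage sketch, not taken from the tapset reference itself: the probe point and the irq value documented above can be exercised with a one-line script; stop it with Ctrl+C.
stap -e 'probe irq_handler.entry { printf("irq %d entered\n", irq) }'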
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-irq-handler-entry
|
Chapter 12. Nested Virtualization
|
Chapter 12. Nested Virtualization 12.1. Overview As of Red Hat Enterprise Linux 7.5, nested virtualization is available as a Technology Preview for KVM guest virtual machines. With this feature, a guest virtual machine (also referred to as level 1 or L1 ) that runs on a physical host ( level 0 or L0 ) can act as a hypervisor, and create its own guest virtual machines ( L2 ). Nested virtualization is useful in a variety of scenarios, such as debugging hypervisors in a constrained environment and testing larger virtual deployments on a limited amount of physical resources. However, note that nested virtualization is not supported or recommended in production user environments, and is primarily intended for development and testing. Nested virtualization relies on host virtualization extensions to function, and it should not be confused with running guests in a virtual environment using the QEMU Tiny Code Generator (TCG) emulation, which is not supported in Red Hat Enterprise Linux.
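Before creating L2 guests, it is worth confirming that the L0 host exposes the virtualization extensions to L1. The following is a minimal sketch for Intel hosts run as root (kvm_amd applies on AMD systems); it is not part of the original chapter, and the file name kvm-nested.conf is an arbitrary choice:
cat /sys/module/kvm_intel/parameters/nested
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel
modprobe kvm_intel
Reloading the module requires that no guest virtual machines are running on the host at the time.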
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/nested_virt
|
Chapter 122. Telegram
|
Chapter 122. Telegram Both producer and consumer are supported The Telegram component provides access to the Telegram Bot API . It allows a Camel-based application to send and receive messages by acting as a Bot, participating in direct conversations with normal users, private and public groups or channels. A Telegram Bot must be created before using this component, following the instructions at the Telegram Bot developers home. When a new Bot is created, the BotFather provides an authorization token corresponding to the Bot. The authorization token is a mandatory parameter for the camel-telegram endpoint. Note In order to allow the Bot to receive all messages exchanged within a group or channel (not just the ones starting with a '/' character), ask the BotFather to disable the privacy mode , using the /setprivacy command. 122.1. Dependencies When using telegram with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-telegram-starter</artifactId> </dependency> 122.2. URI format 122.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 122.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 122.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 122.4. Component Options The Telegram component supports 7 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean baseUri (advanced) Can be used to set an alternative base URI, e.g. when you want to test the component against a mock Telegram API. String client (advanced) To use a custom AsyncHttpClient. AsyncHttpClient clientConfig (advanced) To configure the AsyncHttpClient to use a custom com.ning.http.client.AsyncHttpClientConfig instance. AsyncHttpClientConfig authorizationToken (security) The default Telegram authorization token to be used when the information is not provided in the endpoints. String 122.5. Endpoint Options The Telegram endpoint is configured using URI syntax: with the following path and query parameters: 122.5.1. Path Parameters (1 parameters) Name Description Default Type type (common) Required The endpoint type. Currently, only the 'bots' type is supported. Enum values: bots String 122.5.2. Query Parameters (30 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean limit (consumer) Limit on the number of updates that can be received in a single polling request. 100 Integer sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean timeout (consumer) Timeout in seconds for long polling. Put 0 for short polling or a bigger number for long polling. Long polling produces shorter response time. 30 Integer exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy chatId (producer) The identifier of the chat that will receive the produced messages. Chat ids can be first obtained from incoming messages (eg. when a telegram user starts a conversation with a bot, its client sends automatically a '/start' message containing the chat id). It is an optional parameter, as the chat id can be set dynamically for each outgoing message (using body or headers). 
String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean baseUri (advanced) Can be used to set an alternative base URI, e.g. when you want to test the component against a mock Telegram API. String bufferSize (advanced) The initial in-memory buffer size used when transferring data between Camel and AHC Client. 4096 int clientConfig (advanced) To configure the AsyncHttpClient to use a custom com.ning.http.client.AsyncHttpClientConfig instance. AsyncHttpClientConfig proxyHost (proxy) HTTP proxy host which could be used when sending out the message. String proxyPort (proxy) HTTP proxy port which could be used when sending out the message. Integer proxyType (proxy) HTTP proxy type which could be used when sending out the message. Enum values: HTTP SOCKS4 SOCKS5 HTTP TelegramProxyType backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. 
Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean authorizationToken (security) Required The authorization token for using the bot (ask the BotFather). String 122.5.3. Message Headers Name Description CamelTelegramChatId This header is used by the producer endpoint in order to resolve the chat id that will receive the message. The recipient chat id can be placed (in order of priority) in message body, in the CamelTelegramChatId header or in the endpoint configuration ( chatId option). This header is also present in all incoming messages. CamelTelegramMediaType This header is used to identify the media type when the outgoing message is composed of pure binary data. Possible values are strings or enum values belonging to the org.apache.camel.component.telegram.TelegramMediaType enumeration. CamelTelegramMediaTitleCaption This header is used to provide a caption or title for outgoing binary messages. CamelTelegramParseMode This header is used to format text messages using HTML or Markdown (see org.apache.camel.component.telegram.TelegramParseMode ). 122.6. Usage The Telegram component supports both consumer and producer endpoints. It can also be used in reactive chat-bot mode (to consume, then produce messages). 122.7. Producer Example The following is a basic example of how to send a message to a Telegram chat through the Telegram Bot API. in Java DSL from("direct:start").to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"); or in Spring XML <route> <from uri="direct:start"/> <to uri="telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"/> <route> The code 123456789:insertYourAuthorizationTokenHere is the authorization token corresponding to the Bot. When using the producer endpoint without specifying the chat id option, the target chat will be identified using information contained in the body or headers of the message. The following message bodies are allowed for a producer endpoint (messages of type OutgoingXXXMessage belong to the package org.apache.camel.component.telegram.model ) Java Type Description OutgoingTextMessage To send a text message to a chat OutgoingPhotoMessage To send a photo (JPG, PNG) to a chat OutgoingAudioMessage To send a mp3 audio to a chat OutgoingVideoMessage To send a mp4 video to a chat OutgoingDocumentMessage To send a file to a chat (any media type) OutgoingStickerMessage To send a sticker to a chat (WEBP) OutgoingAnswerInlineQuery To send answers to an inline query EditMessageTextMessage To edit text and game messages (editMessageText) EditMessageCaptionMessage To edit captions of messages (editMessageCaption) EditMessageMediaMessage To edit animation, audio, document, photo, or video messages. (editMessageMedia) EditMessageReplyMarkupMessage To edit only the reply markup of message. (editMessageReplyMarkup) EditMessageDelete To delete a message, including service messages. (deleteMessage) SendLocationMessage To send a location (setSendLocation) EditMessageLiveLocationMessage To send changes to a live location (editMessageLiveLocation) StopMessageLiveLocationMessage To stop updating a live location message sent by the bot or via the bot (for inline bots) before live_period expires (stopMessageLiveLocation) SendVenueMessage To send information about a venue (sendVenue) byte[] To send any media type supported. 
It requires the CamelTelegramMediaType header to be set to the appropriate media type String To send a text message to a chat. It gets converted automatically into a OutgoingTextMessage 122.8. Consumer Example The following is a basic example of how to receive all messages that telegram users are sending to the configured Bot. In Java DSL from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere") .bean(ProcessorBean.class) or in Spring XML <route> <from uri="telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"/> <bean ref="myBean" /> <route> <bean id="myBean" class="com.example.MyBean"/> The MyBean is a simple bean that will receive the messages public class MyBean { public void process(String message) { // or Exchange, or org.apache.camel.component.telegram.model.IncomingMessage (or both) // do process } } Supported types for incoming messages are Java Type Description IncomingMessage The full object representation of an incoming message String The content of the message, for text messages only 122.9. Reactive Chat-Bot Example The reactive chat-bot mode is a simple way of using the Camel component to build a simple chat bot that replies directly to chat messages received from the Telegram users. The following is a basic configuration of the chat-bot in Java DSL from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere") .bean(ChatBotLogic.class) .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"); or in Spring XML <route> <from uri="telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"/> <bean ref="chatBotLogic" /> <to uri="telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"/> <route> <bean id="chatBotLogic" class="com.example.ChatBotLogic"/> The ChatBotLogic is a simple bean that implements a generic String-to-String method. public class ChatBotLogic { public String chatBotProcess(String message) { if( "do-not-reply".equals(message) ) { return null; // no response in the chat } return "echo from the bot: " + message; // echoes the message } } Every non-null string returned by the chatBotProcess method is automatically routed to the chat that originated the request (as the CamelTelegramChatId header is used to route the message). 122.10. Getting the Chat ID If you want to push messages to a specific Telegram chat when an event occurs, you need to retrieve the corresponding chat ID. The chat ID is not currently shown in the telegram client, but you can obtain it using a simple route. First, add the bot to the chat where you want to push messages, then run a route like the following one. from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere") .to("log:INFO?showHeaders=true"); Any message received by the bot will be dumped to your log together with information about the chat ( CamelTelegramChatId header). Once you get the chat ID, you can use the following sample route to push message to it. from("timer:tick") .setBody().constant("Hello") to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere&chatId=123456") Note that the corresponding URI parameter is simply chatId . 122.11. Customizing keyboard You can customize the user keyboard instead of asking him to write an option. OutgoingTextMessage has the property ReplyMarkup which can be used for such thing. 
from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere") .process(exchange -> { OutgoingTextMessage msg = new OutgoingTextMessage(); msg.setText("Choose one option!"); InlineKeyboardButton buttonOptionOneI = InlineKeyboardButton.builder() .text("Option One - I").build(); InlineKeyboardButton buttonOptionOneII = InlineKeyboardButton.builder() .text("Option One - II").build(); InlineKeyboardButton buttonOptionTwoI = InlineKeyboardButton.builder() .text("Option Two - I").build(); ReplyKeyboardMarkup replyMarkup = ReplyKeyboardMarkup.builder() .keyboard() .addRow(Arrays.asList(buttonOptionOneI, buttonOptionOneII)) .addRow(Arrays.asList(buttonOptionTwoI)) .close() .oneTimeKeyboard(true) .build(); msg.setReplyMarkup(replyMarkup); exchange.getIn().setBody(msg); }) .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"); If you want to disable it the message must have the property removeKeyboard set on ReplyKeyboardMarkup object. from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere") .process(exchange -> { OutgoingTextMessage msg = new OutgoingTextMessage(); msg.setText("Your answer was accepted!"); ReplyKeyboardMarkup replyMarkup = ReplyKeyboardMarkup.builder() .removeKeyboard(true) .build(); msg.setReplyKeyboardMarkup(replyMarkup); exchange.getIn().setBody(msg); }) .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"); 122.12. Webhook Mode The Telegram component supports usage in the webhook mode using the camel-webhook component. In order to enable webhook mode, users need first to add a REST implementation to their application. Maven users, for example, can add netty-http to their pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty-http</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> Once done, you need to prepend the webhook URI to the telegram URI you want to use. In Java DSL: from("webhook:telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere").to("log:info"); Some endpoints will be exposed by your application and Telegram will be configured to send messages to them. You need to ensure that your server is exposed to the internet and to pass the right value of the camel.component.webhook.configuration.webhook-external-url property. Refer to the camel-webhook component documentation for instructions on how to set it. 122.13. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.component.telegram.authorization-token The default Telegram authorization token to be used when the information is not provided in the endpoints. String camel.component.telegram.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.telegram.base-uri Can be used to set an alternative base URI, e.g. when you want to test the component against a mock Telegram API. 
String camel.component.telegram.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.telegram.client To use a custom AsyncHttpClient. The option is a org.asynchttpclient.AsyncHttpClient type. AsyncHttpClient camel.component.telegram.client-config To configure the AsyncHttpClient to use a custom com.ning.http.client.AsyncHttpClientConfig instance. The option is a org.asynchttpclient.AsyncHttpClientConfig type. AsyncHttpClientConfig camel.component.telegram.enabled Whether to enable auto configuration of the telegram component. This is enabled by default. Boolean camel.component.telegram.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
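With Red Hat build of Camel Spring Boot, the component-level authorization token listed above is usually supplied once in the configuration file rather than repeated in every endpoint URI. A hedged sketch for application.properties, reusing the placeholder token from the examples in this chapter:
camel.component.telegram.authorization-token=123456789:insertYourAuthorizationTokenHere
Routes can then refer to telegram:bots without the authorizationToken query parameter, because the component default is used when the endpoint does not provide one.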
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-telegram-starter</artifactId> </dependency>",
"telegram:type[?options]",
"telegram:type",
"from(\"direct:start\").to(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\");",
"<route> <from uri=\"direct:start\"/> <to uri=\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\"/> <route>",
"from(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\") .bean(ProcessorBean.class)",
"<route> <from uri=\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\"/> <bean ref=\"myBean\" /> <route> <bean id=\"myBean\" class=\"com.example.MyBean\"/>",
"public class MyBean { public void process(String message) { // or Exchange, or org.apache.camel.component.telegram.model.IncomingMessage (or both) // do process } }",
"from(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\") .bean(ChatBotLogic.class) .to(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\");",
"<route> <from uri=\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\"/> <bean ref=\"chatBotLogic\" /> <to uri=\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\"/> <route> <bean id=\"chatBotLogic\" class=\"com.example.ChatBotLogic\"/>",
"public class ChatBotLogic { public String chatBotProcess(String message) { if( \"do-not-reply\".equals(message) ) { return null; // no response in the chat } return \"echo from the bot: \" + message; // echoes the message } }",
"from(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\") .to(\"log:INFO?showHeaders=true\");",
"from(\"timer:tick\") .setBody().constant(\"Hello\") to(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere&chatId=123456\")",
"from(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\") .process(exchange -> { OutgoingTextMessage msg = new OutgoingTextMessage(); msg.setText(\"Choose one option!\"); InlineKeyboardButton buttonOptionOneI = InlineKeyboardButton.builder() .text(\"Option One - I\").build(); InlineKeyboardButton buttonOptionOneII = InlineKeyboardButton.builder() .text(\"Option One - II\").build(); InlineKeyboardButton buttonOptionTwoI = InlineKeyboardButton.builder() .text(\"Option Two - I\").build(); ReplyKeyboardMarkup replyMarkup = ReplyKeyboardMarkup.builder() .keyboard() .addRow(Arrays.asList(buttonOptionOneI, buttonOptionOneII)) .addRow(Arrays.asList(buttonOptionTwoI)) .close() .oneTimeKeyboard(true) .build(); msg.setReplyMarkup(replyMarkup); exchange.getIn().setBody(msg); }) .to(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\");",
"from(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\") .process(exchange -> { OutgoingTextMessage msg = new OutgoingTextMessage(); msg.setText(\"Your answer was accepted!\"); ReplyKeyboardMarkup replyMarkup = ReplyKeyboardMarkup.builder() .removeKeyboard(true) .build(); msg.setReplyKeyboardMarkup(replyMarkup); exchange.getIn().setBody(msg); }) .to(\"telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty-http</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>",
"from(\"webhook:telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere\").to(\"log:info\");"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-telegram-component-starter
|
2.8.3.3. Saving and Restoring IPTables Rules
|
2.8.3.3. Saving and Restoring IPTables Rules Changes to iptables are transitory; if the system is rebooted or if the iptables service is restarted, the rules are automatically flushed and reset. To save the rules so that they are loaded when the iptables service is started, use the following command as the root user: The rules are stored in the file /etc/sysconfig/iptables and are applied whenever the service is started or the machine is rebooted.
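The /etc/sysconfig/iptables file uses the standard iptables-save format. As a hedged equivalent that is not part of the original section, the rules can also be written and reloaded directly with the low-level tools as root:
iptables-save > /etc/sysconfig/iptables
iptables-restore < /etc/sysconfig/iptables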
|
[
"~]# service iptables save iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-using_iptables-saving_and_restoring_iptables_rules
|
Chapter 36. Disabling Anonymous Binds
|
Chapter 36. Disabling Anonymous Binds Accessing domain resources and running client tools always require Kerberos authentication. However, the back end LDAP directory used by the IdM server allows anonymous binds by default. This potentially opens up all of the domain configuration to unauthorized users, including information about users, machines, groups, services, netgroups, and DNS configuration. It is possible to disable anonymous binds on the 389 Directory Server instance by using LDAP tools to reset the nsslapd-allow-anonymous-access attribute. Warning Certain clients rely on anonymous binds to discover IdM settings. Additionally, the compat tree can break for legacy clients that are not using authentication. Change the nsslapd-allow-anonymous-access attribute to rootdse . Important Anonymous access can be completely allowed (on) or completely blocked (off). However, completely blocking anonymous access also blocks external clients from checking the server configuration. LDAP and web clients are not necessarily domain clients, so they connect anonymously to read the root DSE file to get connection information. The rootdse allows access to the root DSE and server configuration without any access to the directory data. Restart the 389 Directory Server instance to load the new setting. Additional Resources: The Managing Entries Using the Command Line section in the Red Hat Directory Server Administration Guide .
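To verify the change, an anonymous search against the root DSE should still succeed while anonymous access to directory data is refused. This check is a hedged sketch using a standard LDAP client and the example host name from this chapter:
ldapsearch -x -h server.example.com -p 389 -s base -b "" "objectclass=*"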
|
[
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.example.com -p 389 -ZZ Enter LDAP Password: dn: cn=config changetype: modify replace: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse modifying entry \"cn=config\"",
"systemctl restart dirsrv.target"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/disabling-anon-binds
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.11/making-open-source-more-inclusive
|
Chapter 34. Desktop
|
Chapter 34. Desktop Broken pygobject3 package dependencies prevent upgrade from Red Hat Enterprise Linux 7.1 The pygobject3-devel.i686 32-bit package has been removed in Red Hat Enterprise Linux 7.2 and was replaced with a multilib version. If you have the 32-bit version of the package installed on a Red Hat Enterprise Linux 7.1 system, then you will encounter a yum error when attempting to upgrade to Red Hat Enterprise Linux 7.2. To work around this problem, use the yum remove pygobject3-devel.i686 command as root to uninstall the 32-bit version of the package before upgrading your system. Build requirements not defined correctly for Emacs The binutils package earlier than version 2.23.52.0.1-54 causes a segmentation fault during the build. As a consequence, it is not possible to build the Emacs text editor on IBM Power Systems. To work around this problem, install the latest binutils . External display issues when combining laptop un/dock and suspend In the GNOME desktop environment, with some laptops, external displays connected to a docking station might not be automatically activated when resuming a suspended laptop after it has been undocked and docked again. To work around this problem, open the Displays configuration panel or run the xrandr command in a terminal. This makes the external displays available again. Emacs sometimes terminates unexpectedly when using the up arrow on ARM On the ARM architecture, the Emacs text editor sometimes terminates unexpectedly with a segmentation fault when scrolling up a file buffer. This happens only when the syntax highlighting is enabled. There is not currently any known workaround for this problem.
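For reference, the two command-line workarounds mentioned above can be run as follows (the package removal as root); this is a summary sketch rather than an addition to the known issues:
yum remove pygobject3-devel.i686
xrandr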
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-desktop
|
9.3.2. Managing HA Services with clusvcadm
|
9.3.2. Managing HA Services with clusvcadm You can manage HA services using the clusvcadm command. With it you can perform the following operations: Enable and start a service. Disable a service. Stop a service. Freeze a service Unfreeze a service Migrate a service (for virtual machine services only) Relocate a service. Restart a service. Restart failed non-critical resources in a resource group Table 9.2, "Service Operations" describes the operations in more detail. For a complete description on how to perform those operations, see the clusvcadm utility man page. Table 9.2. Service Operations Service Operation Description Command Syntax Enable Start the service, optionally on a preferred target and optionally according to failover domain rules. In the absence of either a preferred target or failover domain rules, the local host where clusvcadm is run will start the service. If the original start fails, the service behaves as though a relocate operation was requested (see Relocate in this table). If the operation succeeds, the service is placed in the started state. clusvcadm -e <service_name> or clusvcadm -e <service_name> -m <member> (Using the -m option specifies the preferred target member on which to start the service.) Disable Stop the service and place into the disabled state. This is the only permissible operation when a service is in the failed state. clusvcadm -d <service_name> Relocate Move the service to another node. Optionally, you may specify a preferred node to receive the service, but the inability of the service to run on that host (for example, if the service fails to start or the host is offline) does not prevent relocation, and another node is chosen. rgmanager attempts to start the service on every permissible node in the cluster. If no permissible target node in the cluster successfully starts the service, the relocation fails and the service is attempted to be restarted on the original owner. If the original owner cannot restart the service, the service is placed in the stopped state. clusvcadm -r <service_name> or clusvcadm -r <service_name> -m <member> (Using the -m option specifies the preferred target member on which to start the service.) Stop Stop the service and place into the stopped state. clusvcadm -s <service_name> Freeze Freeze a service on the node where it is currently running. This prevents status checks of the service as well as failover in the event the node fails or rgmanager is stopped. This can be used to suspend a service to allow maintenance of underlying resources. Refer to the section called "Considerations for Using the Freeze and Unfreeze Operations" for important information about using the freeze and unfreeze operations. clusvcadm -Z <service_name> Unfreeze Unfreeze takes a service out of the freeze state. This re-enables status checks. Refer to the section called "Considerations for Using the Freeze and Unfreeze Operations" for important information about using the freeze and unfreeze operations. clusvcadm -U <service_name> Migrate Migrate a virtual machine to another node. You must specify a target node. Depending on the failure, a failure to migrate may result with the virtual machine in the failed state or in the started state on the original owner. clusvcadm -M <service_name> -m <member> Important For the migrate operation, you must specify a target node using the -m <member> option. Restart Restart a service on the node where it is currently running. clusvcadm -R <service_name> Convalesce Convalesce (repair, fix) a resource group. 
Whenever a non-critical subtree's maximum restart threshold is exceeded, the subtree is stopped, and the service gains a P flag (partial), which is displayed in the output of the clustat command to one of the cluster resource groups. The convalesce operation attempts to start failed, non-critical resources in a service group and clears the P flag if the failed, non-critical resources successfully start. clusvcadm -c <service_name> Considerations for Using the Freeze and Unfreeze Operations Using the freeze operation allows maintenance of parts of rgmanager services. For example, if you have a database and a web server in one rgmanager service, you may freeze the rgmanager service, stop the database, perform maintenance, restart the database, and unfreeze the service. When a service is frozen, it behaves as follows: Status checks are disabled. Start operations are disabled. Stop operations are disabled. Failover will not occur (even if you power off the service owner). Important Failure to follow these guidelines may result in resources being allocated on multiple hosts: You must not stop all instances of rgmanager when a service is frozen unless you plan to reboot the hosts prior to restarting rgmanager. You must not unfreeze a service until the reported owner of the service rejoins the cluster and restarts rgmanager.
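A typical maintenance window built from the operations in Table 9.2, shown as a hedged sketch with a hypothetical service name example_service:
clusvcadm -Z example_service    # freeze the service; status checks and failover stop
clustat                         # confirm the service state before starting maintenance
clusvcadm -U example_service    # unfreeze once maintenance on the underlying resources is complete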
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-admin-manage-ha-services-operations-cli-ca
|
Chapter 48. Security
|
Chapter 48. Security USBGuard enables blocking USB devices while the screen is locked as a Technology Preview With the USBGuard framework, you can influence how an already running usbguard-daemon instance handles newly inserted USB devices by setting the value of the InsertedDevicePolicy runtime parameter. This functionality is provided as a Technology Preview, and the default choice is to apply the policy rules to figure out whether to authorize the device or not. See the Blocking USB devices while the screen is locked Knowledge Base article: https://access.redhat.com/articles/3230621 (BZ#1480100) pk12util can now import certificates signed with RSA-PSS The pk12util tool now provides importing a certificate signed with the RSA-PSS algorithm as a Technology Preview. Note that if the corresponding private key is imported and has the PrivateKeyInfo.privateKeyAlgorithm field that restricts the signing algorithm to RSA-PSS , it is ignored when importing the key to a browser. See https://bugzilla.mozilla.org/show_bug.cgi?id=1413596 for more information. (BZ# 1431210 ) Support for certificates signed with RSA-PSS in certutil has been improved Support for certificates signed with the RSA-PSS algorithm in the certutil tool has been improved. Notable enhancements and fixes include: The --pss option is now documented. The PKCS#1 v1.5 algorithm is no longer used for self-signed signatures when a certificate is restricted to use RSA-PSS . Empty RSA-PSS parameters in the subjectPublicKeyInfo field are no longer printed as invalid when listing certificates. The --pss-sign option for creating regular RSA certificates signed with the RSA-PSS algorithm has been added. Support for certificates signed with RSA-PSS in certutil is provided as a Technology Preview. (BZ# 1425514 ) NSS is now able to verify RSA-PSS signatures on certificates With the new version of the nss package, the Network Security Services (NSS) libraries now provide verifying RSA-PSS signatures on certificates as a Technology Preview. Prior to this update, clients using NSS as the SSL backend were not able to establish a TLS connection to a server that offered only certificates signed with the RSA-PSS algorithm. Note that the functionality has the following limitations: The algorithm policy settings in the /etc/pki/nss-legacy/rhel7.config file do not apply to the hash algorithms used in RSA-PSS signatures. RSA-PSS parameters restrictions between certificate chains are ignored and only a single certificate is taken into account. (BZ# 1432142 ) SECCOMP can be now enabled in libreswan As a Technology Preview, the seccomp=enabled|tolerant|disabled option has been added to the ipsec.conf configuration file, which makes it possible to use the Secure Computing mode (SECCOMP). This improves the syscall security by whitelisting all the system calls that Libreswan is allowed to execute. For more information, see the ipsec.conf(5) man page. (BZ# 1375750 )
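For the Libreswan SECCOMP preview described above, the new option is set in the config setup section of /etc/ipsec.conf. The placement shown here is a hedged sketch based on the option name given in this chapter; see the ipsec.conf(5) man page for the authoritative syntax:
config setup
    seccomp=enabled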
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/technology_previews_security
|
Part I. Planning a Red Hat Process Automation installation
|
Part I. Planning a Red Hat Process Automation installation As a system administrator, you have several options for installing Red Hat Process Automation.
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/assembly-planning
|
2.8. Installing GFS
|
2.8. Installing GFS Installing GFS consists of installing Red Hat GFS RPMs on nodes in a Red Hat cluster. Before installing the RPMs, make sure of the following: The cluster nodes meet the system requirements described in this chapter. You have noted the key characteristics of your GFS configuration (refer to Section 1.4, "Before Setting Up GFS" ). The correct Red Hat Cluster Suite software is installed in the cluster. For information on installing RPMS for Red Hat Cluster Suite and Red Hat GFS, see Configuring and Managing a Red Hat Cluster . If you have already installed the appropriate Red Hat Cluster Suite RPMs, follow the procedures that pertain to installing the Red Hat GFS RPMs.
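After installation, the presence of the GFS RPMs can be checked with rpm. This is a hedged sketch that assumes the GFS and GFS-kernel package names; verify the exact names for your kernel variant:
rpm -q GFS GFS-kernel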
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/ch-install
|
Providing feedback on Red Hat build of Quarkus documentation
|
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_yaml_file/proc_providing-feedback-on-red-hat-documentation_quarkus-configuration-guide
|
Chapter 6. Managing metrics
|
Chapter 6. Managing metrics You can collect metrics to monitor how cluster components and your own workloads are performing. 6.1. Understanding metrics In Red Hat OpenShift Service on AWS, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. Metrics enable you to monitor how cluster components and your own workloads are performing. You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level. In Red Hat OpenShift Service on AWS, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics . For instance, you can expose a route to the prometheus-example-app example application and then run the following to view all of its available metrics: USD curl http://<example_app_endpoint>/metrics Example output # HELP http_requests_total Count of all HTTP requests # TYPE http_requests_total counter http_requests_total{code="200",method="get"} 4 http_requests_total{code="404",method="get"} 2 # HELP version Version information about this binary # TYPE version gauge version{version="v0.1.0"} 1 Additional resources Prometheus client library documentation 6.2. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 6.2.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml . Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 6.2.2. 
Specifying how a service is monitored To use the metrics exposed by your service, you must configure Red Hat OpenShift Service on AWS monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the dedicated-admin role or the monitoring-edit role. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a new YAML configuration file named example-app-service-monitor.yaml . Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app 1 Specify a user-defined namespace where your service runs. 2 Specify endpoint ports to be scraped by Prometheus. 3 Configure a selector to match your service based on its metadata labels. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. Verify that the ServiceMonitor resource is running: USD oc -n <namespace> get servicemonitor Example output NAME AGE prometheus-example-monitor 81m 6.2.3. Example service endpoint authentication settings You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs). The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. 6.2.3.1. Sample YAML authentication with a bearer token The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace: Example bearer token secret apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1 1 Specify an authentication token. The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth : Example bearer token authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the authentication token in the specified Secret object. 
2 The name of the Secret object that contains the authentication credentials. Important Do not use bearerTokenFile to configure bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected. 6.2.3.2. Sample YAML for Basic authentication The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace: Example Basic authentication secret apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2 1 Specify a username for authentication. 2 Specify a password for authentication. The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth : Example Basic authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the username in the specified Secret object. 2 4 The name of the Secret object that contains the Basic authentication. 3 The key that contains the password in the specified Secret object. 6.2.3.3. Sample YAML authentication with OAuth 2.0 The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace: Example OAuth 2.0 secret apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 1 Specify an Oauth 2.0 ID. 2 Specify an Oauth 2.0 secret. The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-oauth2 : Example OAuth 2.0 authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the OAuth 2.0 ID in the specified Secret object. 2 4 The name of the Secret object that contains the OAuth 2.0 credentials. 3 The key that contains the OAuth 2.0 secret in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . Additional resources How to scrape metrics using TLS in a ServiceMonitor configuration in a user-defined project 6.3. Querying metrics for all projects with the Red Hat OpenShift Service on AWS web console You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Service on AWS and user-defined projects in the Metrics UI. Note Only dedicated administrators have access to the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring. The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. 
You can also run custom Prometheus Query Language (PromQL) queries. Prerequisites You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective of the Red Hat OpenShift Service on AWS web console, click Observe and go to the Metrics tab. To add one or more queries, perform any of the following actions: Option Description Select an existing query. From the Select query drop-down list, select an existing query. Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Click Add query . Duplicate an existing query. Click the options menu to the query, then choose Duplicate query . Disable a query from being run. Click the options menu to the query and choose Disable query . To run queries that you created, click Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query. Optional: Save the page URL to use this set of queries again in the future. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions: Option Description Hide all metrics from a query. Click the options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Perform one of the following actions: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu to select the time range. Reset the time range. Click Reset zoom . Display outputs for all queries at a specific point in time. Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. Hide the plot. Click Hide graph . Additional resources Prometheus query documentation 6.4. Querying metrics for user-defined projects with the Red Hat OpenShift Service on AWS web console You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring. As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet. These queries are restricted to the selected project. 
You can also run custom Prometheus Query Language (PromQL) queries for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure In the Developer perspective of the Red Hat OpenShift Service on AWS web console, click Observe and go to the Metrics tab. Select the project that you want to view metrics for from the Project: list. To add one or more queries, perform any of the following actions: Option Description Select an existing query. From the Select query drop-down list, select an existing query. Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Click Add query . Duplicate an existing query. Click the options menu to the query, then choose Duplicate query . Disable a query from being run. Click the options menu to the query and choose Disable query . To run queries that you created, click Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query. Optional: Save the page URL to use this set of queries again in the future. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions: Option Description Hide all metrics from a query. Click the options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Perform one of the following actions: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu to select the time range. Reset the time range. Click Reset zoom . Display outputs for all queries at a specific point in time. Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. Hide the plot. Click Hide graph . Additional resources Prometheus query documentation 6.5. 
Getting detailed information about a metrics target You can use the Red Hat OpenShift Service on AWS web console to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. For example, you can view the current status of targeted endpoints to see when Red Hat OpenShift Service on AWS monitoring is not able to scrape metrics from a targeted component. The Metrics targets page shows targets for user-defined projects. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Procedure In the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to Observe Targets . The Metrics targets page opens with a list of all service endpoint targets that are being scraped for metrics. This page shows details about targets for default Red Hat OpenShift Service on AWS and user-defined projects. This page lists the following information for each target: Service endpoint URL being scraped The ServiceMonitor resource being monitored The up or down status of the target Namespace Last scrape time Duration of the last scrape Optional: To find a specific target, perform any of the following actions: Option Description Filter the targets by status and source. Choose filters in the Filter list. The following filtering options are available: Status filters: Up . The target is currently up and being actively scraped for metrics. Down . The target is currently down and not being scraped for metrics. Source filters: Platform . Platform-level targets relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality. User . User targets relate to user-defined projects. These projects are user-created and can be customized. Search for a target by name or label. Enter a search term in the Text or Label field to the search box. Sort the targets. Click one or more of the Endpoint Status , Namespace , Last Scrape , and Scrape Duration column headers. Click the URL in the Endpoint column for a target to go to its Target details page. This page provides information about the target, including the following information: The endpoint URL being scraped for metrics The current Up or Down status of the target A link to the namespace A link to the ServiceMonitor resource details Labels attached to the target The most recent time that the target was scraped for metrics
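The up or down status that the Metrics targets page reports is also exposed as the standard Prometheus up metric, so you can cross-check a target from the Metrics tab. A sketch, assuming the ns1 namespace used for the example service in this chapter:

up{namespace="ns1"}

A value of 1 means that the last scrape of the target succeeded, and a value of 0 means that it failed.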
|
[
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n <namespace> get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app"
] |
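The Secret objects shown in the listings above can also be created imperatively with the OpenShift CLI instead of applying YAML manifests. The following commands are a sketch; the literal values are placeholders for your own credentials:

oc -n ns1 create secret generic example-bearer-auth --from-literal=token=<authentication_token>
oc -n ns1 create secret generic example-basic-auth --from-literal=user=<basic_username> --from-literal=password=<basic_password>
oc -n ns1 create secret generic example-oauth2 --from-literal=id=<oauth2_id> --from-literal=secret=<oauth2_secret>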
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/managing-metrics
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/proc_providing-feedback-on-red-hat-documentation
|
Chapter 1. OpenShift Container Platform CLI tools overview
|
Chapter 1. OpenShift Container Platform CLI tools overview A user performs a range of operations while working on OpenShift Container Platform such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs OpenShift Container Platform offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in OpenShift Container Platform: OpenShift CLI ( oc ) : This is the most commonly used CLI tool by OpenShift Container Platform users. It helps both cluster administrators and developers to perform end-to-end operations across OpenShift Container Platform using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in OpenShift Container Platform, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge.
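As a brief illustration of the kind of end-to-end operations that the OpenShift CLI supports, the following commands log in to a cluster, create a project, deploy an application from a container image, and inspect the result. The server URL, token, and image reference are placeholders:

oc login https://api.example.com:6443 --token=<token>
oc new-project my-project
oc new-app registry.example.com/my-team/my-app:latest
oc get pods -n my-project

The other CLI tools listed above follow a similar pattern of short, task-focused commands within their own domains.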
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cli_tools/cli-tools-overview
|
Chapter 22. Configuring Identity
|
Chapter 22. Configuring Identity The director includes parameters to help configure Identity Service (keystone) settings: 22.1. Region Name By default, your overcloud's region will be named regionOne . You can change this by adding a KeystoneRegion entry to your environment file. This setting cannot be changed post-deployment:
|
[
"parameter_defaults: KeystoneRegion: 'SampleRegion'"
] |
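As a sketch of how the KeystoneRegion parameter is applied, the environment file shown above is passed to the overcloud deployment with the -e option. The file path and the rest of the deploy command are illustrative and depend on your existing deployment plan:

openstack overcloud deploy --templates \
  -e /home/stack/templates/keystone-region.yaml

Because the region name cannot be changed after deployment, include this environment file in the initial deployment command.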
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/sect-identity_configuration
|
Chapter 1. Audit logs in Red Hat Developer Hub
|
Chapter 1. Audit logs in Red Hat Developer Hub Audit logs are a chronological set of records documenting the user activities, system events, and data changes that affect your Red Hat Developer Hub users, administrators, or components. Administrators can view Developer Hub audit logs in the OpenShift Container Platform web console to monitor scaffolder events, changes to the RBAC system, and changes to the Catalog database. Audit logs include the following information: Name of the audited event Actor that triggered the audited event, for example, terminal, port, IP address, or hostname Event metadata, for example, date, time Event status, for example, success , failure Severity levels, for example, info , debug , warn , error You can use the information in the audit log to achieve the following goals: Enhance security Trace activities, including those initiated by automated systems and software templates, back to their source. Know when software templates are executed, as well as the details of application and component installations, updates, configuration changes, and removals. Automate compliance Use streamlined processes to view log data for specified points in time for auditing purposes or continuous compliance maintenance. Debug issues Use access records and activity details to fix issues with software templates or plugins. Note Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. Additional resources For more information about logging in OpenShift Container Platform, see About Logging For a complete list of fields that a Developer Hub audit log can include, see Section 1.2.1, "Audit log fields" For a list of scaffolder events that a Developer Hub audit log can include, see Section 1.2.2, "Scaffolder events" 1.1. Configuring audit logs for Developer Hub on OpenShift Container Platform Use the OpenShift Container Platform web console to configure the following OpenShift Container Platform logging components to use audit logging for Developer Hub: Logging deployment Configure the logging environment, including both the CPU and memory limits for each logging component. For more information, see Red Hat OpenShift Container Platform - Configuring your Logging deployment . Logging collector Configure the spec.collection stanza in the ClusterLogging custom resource (CR) to use a supported modification to the log collector and collect logs from STDOUT . For more information, see Red Hat OpenShift Container Platform - Configuring the logging collector . Log forwarding Send logs to specific endpoints inside and outside your OpenShift Container Platform cluster by specifying a combination of outputs and pipelines in a ClusterLogForwarder CR. For more information, see Red Hat OpenShift Container Platform - Enabling JSON log forwarding and Red Hat OpenShift Container Platform - Configuring log forwarding . 1.2. Viewing audit logs in Developer Hub Administrators can view, search, filter, and manage the log data from the Red Hat OpenShift Container Platform web console. You can filter audit logs from other log types by using the isAuditLog field. Prerequisites You are logged in as an administrator in the OpenShift Container Platform web console. Procedure From the Developer perspective of the OpenShift Container Platform web console, click the Topology tab. 
From the Topology view, click the pod that you want to view audit log data for. From the pod panel, click the Resources tab. From the Pods section of the Resources tab, click View logs . From the Logs view, enter isAuditLog into the Search field to filter audit logs from other log types. You can use the arrows to browse the logs containing the isAuditLog field. 1.2.1. Audit log fields Developer Hub audit logs can include the following fields: eventName The name of the audited event. actor An object containing information about the actor that triggered the audited event. Contains the following fields: actorId The name/id/ entityRef of the associated user or service. Can be null if an unauthenticated user accesses the endpoints and the default authentication policy is disabled. ip The IP address of the actor (optional). hostname The hostname of the actor (optional). client The user agent of the actor (optional). stage The stage of the event at the time that the audit log was generated, for example, initiation or completion . status The status of the event, for example, succeeded or failed . meta An optional object containing event specific data, for example, taskId . request An optional field that contains information about the HTTP request sent to an endpoint. Contains the following fields: method The HTTP method of the request. query The query fields of the request. params The params fields of the request. body The request body . The secrets provided when creating a task are redacted and appear as * . url The endpoint URL of the request. response An optional field that contains information about the HTTP response sent from an endpoint. Contains the following fields: status The status code of the HTTP response. body The contents of the request body. isAuditLog A flag set to true to differentiate audit logs from other log types. errors A list of errors containing the name , message and potentially the stack field of the error. Only appears when status is failed . 1.2.2. Scaffolder events Developer Hub audit logs can include the following scaffolder events: ScaffolderParameterSchemaFetch Tracks GET requests to the /v2/templates/:namespace/:kind/:name/parameter-schema endpoint which return template parameter schemas ScaffolderInstalledActionsFetch Tracks GET requests to the /v2/actions endpoint which grabs the list of installed actions ScaffolderTaskCreation Tracks POST requests to the /v2/tasks endpoint which creates tasks that the scaffolder executes ScaffolderTaskListFetch Tracks GET requests to the /v2/tasks endpoint which fetches details of all tasks in the scaffolder. ScaffolderTaskFetch Tracks GET requests to the /v2/tasks/:taskId endpoint which fetches details of a specified task :taskId ScaffolderTaskCancellation Tracks POST requests to the /v2/tasks/:taskId/cancel endpoint which cancels a running task ScaffolderTaskStream Tracks GET requests to the /v2/tasks/:taskId/eventstream endpoint which returns an event stream of the task logs of task :taskId ScaffolderTaskEventFetch Tracks GET requests to the /v2/tasks/:taskId/events endpoint which returns a snapshot of the task logs of task :taskId ScaffolderTaskDryRun Tracks POST requests to the /v2/dry-run endpoint which creates a dry-run task. All audit logs for events associated with dry runs have the meta.isDryLog flag set to true . 
ScaffolderStaleTaskCancellation Tracks automated cancellation of stale tasks ScaffolderTaskExecution Tracks the initiation and completion of a real scaffolder task execution (will not occur during dry runs) ScaffolderTaskStepExecution Tracks initiation and completion of a scaffolder task step execution ScaffolderTaskStepSkip Tracks steps skipped due to if conditionals not being met ScaffolderTaskStepIteration Tracks the step execution of each iteration of a task step that contains the each field. 1.2.3. Catalog events Developer Hub audit logs can include the following catalog events: CatalogEntityAncestryFetch Tracks GET requests to the /entities/by-name/:kind/:namespace/:name/ancestry endpoint, which returns the ancestry of an entity CatalogEntityBatchFetch Tracks POST requests to the /entities/by-refs endpoint, which returns a batch of entities CatalogEntityDeletion Tracks DELETE requests to the /entities/by-uid/:uid endpoint, which deletes an entity Note If the parent location of the deleted entity is still present in the catalog, then the entity is restored in the catalog during the processing cycle. CatalogEntityFacetFetch Tracks GET requests to the /entity-facets endpoint, which returns the facets of an entity CatalogEntityFetch Tracks GET requests to the /entities endpoint, which returns a list of entities CatalogEntityFetchByName Tracks GET requests to the /entities/by-name/:kind/:namespace/:name endpoint, which returns an entity matching the specified entity reference, for example, <kind>:<namespace>/<name> CatalogEntityFetchByUid Tracks GET requests to the /entities/by-uid/:uid endpoint, which returns an entity matching the unique ID of the specified entity CatalogEntityRefresh Tracks POST requests to the /entities/refresh endpoint, which schedules the specified entity to be refreshed CatalogEntityValidate Tracks POST requests to the /entities/validate endpoint, which validates the specified entity CatalogLocationCreation Tracks POST requests to the /locations endpoint, which creates a location Note A location is a marker that references other places to look for catalog data. CatalogLocationAnalyze Tracks POST requests to the /locations/analyze endpoint, which analyzes the specified location CatalogLocationDeletion Tracks DELETE requests to the /locations/:id endpoint, which deletes a location and all child entities associated with it CatalogLocationFetch Tracks GET requests to the /locations endpoint, which returns a list of locations CatalogLocationFetchByEntityRef Tracks GET requests to the /locations/by-entity endpoint, which returns a list of locations associated with the specified entity reference CatalogLocationFetchById Tracks GET requests to the /locations/:id endpoint, which returns a location matching the specified location ID QueriedCatalogEntityFetch Tracks GET requests to the /entities/by-query endpoint, which returns a list of entities matching the specified query 1.3. Audit log file rotation in Red Hat Developer Hub Logging to a rotating file in Red Hat Developer Hub is helpful for persistent storage of audit logs. Persistent storage ensures that the file remains intact even after a pod is restarted. Audit log file rotation creates a new file at regular intervals, with only new data being written to the latest file. Default settings Audit logging to a rotating file is disabled by default. When it is enabled, the default behavior changes to: Rotate logs at midnight (local system timezone). Log file format: redhat-developer-hub-audit-%DATE%.log . 
Log files are stored in /var/log/redhat-developer-hub/audit . No automatic log file deletion. No gzip compression of archived logs. No file size limit. Audit logs are written in the /var/log/redhat-developer-hub/audit directory. Log file names Audit log file names are in the following format: redhat-developer-hub-audit-%DATE%.log where %DATE% is the format specified in auditLog.rotateFile.dateFormat . You can customize file names when you configure file rotation. File rotation date and frequency Supported auditLog.rotateFile.frequency options include: daily : Rotate daily at 00:00 local time Xm : Rotate every X minutes (where X is a number between 0 and 59) Xh : Rotate every X hours (where X is a number between 0 and 23) test : Rotate every 1 minute custom : Use dateFormat to set the rotation frequency (default if frequency is not specified) If frequency is set to Xh , Xm or test , the dateFormat setting must be configured in a format that includes the specified time component. Otherwise, the rotation might not work as expected. For example, use dateFormat: 'YYYY-MM-DD-HH for hourly rotation, and dateFormat: 'YYYY-MM-DD-HH-mm for minute rotation. Example minute rotation: auditLog: rotateFile: # If you want to rotate the file every 17 minutes dateFormat: 'YYYY-MM-DD-HH-mm' frequency: '17m' The dateFormat setting configures both the %DATE% in logFileName and the file rotation frequency if frequency is set to custom . The default format is YYYY-MM-DD , meaning daily rotation. Supported values are based on Moment.js formats . If the frequency is set to custom , then rotations take place when the date string, which is represented in the specified dateFormat , changes. Archive and delete By default, log files are not archived or deleted. Enable and configure audit file rotation If you are an administrator of Developer Hub, you can enable file rotation for audit logs, and configure the file log location, name format, frequency, log file size, retention policy, and archiving. Example audit log file rotation configuration auditLog: rotateFile: enabled: true 1 logFileDirPath: /custom-path 2 logFileName: custom-audit-log-%DATE%.log 3 frequency: '12h' 4 dateFormat: 'YYYY-MM-DD' 5 utc: false 6 maxSize: 100m 7 maxFilesOrDays: 14 8 zippedArchive: true 9 1 Set enabled to true to use audit log file rotation. By default, it is set to false . 2 Absolute path to the log file. The specified directory is created automatically if it does not exist. 3 Default log file name format. 4 If no frequency is specified, then the default file rotation occurs daily at 00:00 local time. 5 Default date format. 6 Set utc to true to use UTC time for dateFormat instead of local time. 7 Sets a maximum file size limit for the audit log. In this example, the maximum size is 100m. 8 If set to number of files, for example 14 , then it deletes the oldest log when there are more than 14 log files. If set to number of days, for example 5d , then it deletes logs older than 5 days. 9 Archive and compress rotated logs using gzip . The default value is false . Note By default, log files are not archived or deleted. If log deletion is enabled, then a .<sha256 hash>-audit.json is generated in the directory where the logs are to track generated logs. Any log file not contained in the directory is not subject to automatic deletion. A new .<sha256 hash>-audit.json file is generated each time the backend starts, which causes audit logs to stop being tracked or deleted, except for those still in use by the current backend.
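For deployments that only need rotation and a retention limit, a smaller configuration is sufficient because every rotateFile option other than enabled has a documented default. The following is a minimal sketch based on the options described above; the 7-day retention period is only an example value:

auditLog:
  rotateFile:
    enabled: true
    maxFilesOrDays: 7d
    zippedArchive: true

With this configuration, audit logs rotate daily at midnight local time, use the default directory and file name format, are archived and compressed with gzip when rotated, and are deleted after 7 days.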
|
[
"auditLog: rotateFile: # If you want to rotate the file every 17 minutes dateFormat: 'YYYY-MM-DD-HH-mm' frequency: '17m'",
"auditLog: rotateFile: enabled: true 1 logFileDirPath: /custom-path 2 logFileName: custom-audit-log-%DATE%.log 3 frequency: '12h' 4 dateFormat: 'YYYY-MM-DD' 5 utc: false 6 maxSize: 100m 7 maxFilesOrDays: 14 8 zippedArchive: true 9"
] |
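Section 1.1 refers to forwarding logs by defining outputs and pipelines in a ClusterLogForwarder custom resource. The following is a minimal sketch of that kind of resource for sending application logs from a Developer Hub namespace to an external log store; the API version, the output type, the endpoint URL, and the namespace name are assumptions that you should adapt from the linked OpenShift Container Platform log forwarding documentation:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: developer-hub-input
    application:
      namespaces:
      - my-developer-hub
  outputs:
  - name: external-store
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
  pipelines:
  - name: forward-developer-hub-logs
    inputRefs:
    - developer-hub-input
    outputRefs:
    - external-store

Replace my-developer-hub with the namespace where Developer Hub is installed, and choose an output type and endpoint that match the log store you forward to.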
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/audit_log/assembly-audit-log
|
Chapter 4. Deprecated and unsupported Red Hat build of OpenJDK capabilities
|
Chapter 4. Deprecated and unsupported Red Hat build of OpenJDK capabilities Ensure that you review the following deprecated and unsupported features before you install Red Hat build of OpenJDK 21: JDK Mission Control in the Red Hat build of OpenJDK 21 for Windows packages The Windows installer and the zip archive for Red Hat build of OpenJDK 21 no longer provide a distribution of JDK Mission Control (JMC). You can use the Red Hat build of Cryostat to manage JFR recordings for Java applications deployed on cloud platforms such as Red Hat OpenShift . For more information about the removal of JMC, see the Red Hat knowledge base article: Where is JDK Mission Control (JMC) in JDK 21? . Deprecate Finalization for removal This release deprecates Finalization for removal in a future release. For more information, see JEP 421: Deprecate Finalization for Removal . Prepare to disallow the dynamic loading of agents This release issues warnings when agents are loaded dynamically into a running JVM. The dynamic loading of agents will be disallowed by default in a future release. For more information, see JEP 451: Prepare to Disallow the Dynamic Loading of Agents . Note Red Hat does not provide builds of OpenJDK with 32-bit support. In OpenJDK 21, Windows 32-bit x86 support is also now deprecated upstream. This feature will be removed in a future release. For more information, see JEP 449: Deprecate the Windows 32-bit x86 Port for Removal .
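If your workloads rely on dynamically attached agents, such as some profiling and instrumentation tools, you can acknowledge the new behavior and suppress the warning by explicitly allowing dynamic loading on the command line. A sketch; the application JAR name is a placeholder:

java -XX:+EnableDynamicAgentLoading -jar my-app.jar

Treat this as a transitional measure and plan to load required agents at startup with -javaagent once dynamic loading is disallowed by default.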
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/openjdk21-deprecated-unsupported-features
|
Chapter 1. Understanding OpenShift updates
|
Chapter 1. Understanding OpenShift updates 1.1. Introduction to OpenShift updates With OpenShift Container Platform 4, you can update an OpenShift Container Platform cluster with a single operation by using the web console or the OpenShift CLI ( oc ). Platform administrators can view new update options either by going to Administration Cluster Settings in the web console or by looking at the output of the oc adm upgrade command. Red Hat hosts a public OpenShift Update Service (OSUS), which serves a graph of update possibilities based on the OpenShift Container Platform release images in the official registry. The graph contains update information for any public OCP release. OpenShift Container Platform clusters are configured to connect to the OSUS by default, and the OSUS responds to clusters with information about known update targets. An update begins when either a cluster administrator or an automatic update controller edits the custom resource (CR) of the Cluster Version Operator (CVO) with a new version. To reconcile the cluster with the newly specified version, the CVO retrieves the target release image from an image registry and begins to apply changes to the cluster. Note Operators previously installed through Operator Lifecycle Manager (OLM) follow a different process for updates. See Updating installed Operators for more information. The target release image contains manifest files for all cluster components that form a specific OCP version. When updating the cluster to a new version, the CVO applies manifests in separate stages called Runlevels. Most, but not all, manifests support one of the cluster Operators. As the CVO applies a manifest to a cluster Operator, the Operator might perform update tasks to reconcile itself with its new specified version. The CVO monitors the state of each applied resource and the states reported by all cluster Operators. The CVO only proceeds with the update when all manifests and cluster Operators in the active Runlevel reach a stable condition. After the CVO updates the entire control plane through this process, the Machine Config Operator (MCO) updates the operating system and configuration of every node in the cluster. 1.1.1. Common questions about update availability There are several factors that affect if and when an update is made available to an OpenShift Container Platform cluster. The following list provides common questions regarding the availability of an update: What are the differences between each of the update channels? A new release is initially added to the candidate channel. After successful final testing, a release on the candidate channel is promoted to the fast channel, an errata is published, and the release is now fully supported. After a delay, a release on the fast channel is finally promoted to the stable channel. This delay represents the only difference between the fast and stable channels. Note For the latest z-stream releases, this delay may generally be a week or two. However, the delay for initial updates to the latest minor version may take much longer, generally 45-90 days. Releases promoted to the stable channel are simultaneously promoted to the eus channel. The primary purpose of the eus channel is to serve as a convenience for clusters performing a Control Plane Only update. Is a release on the stable channel safer or more supported than a release on the fast channel? 
If a regression is identified for a release on a fast channel, it will be resolved and managed to the same extent as if that regression was identified for a release on the stable channel. The only difference between releases on the fast and stable channels is that a release only appears on the stable channel after it has been on the fast channel for some time, which provides more time for new update risks to be discovered. A release that is available on the fast channel always becomes available on the stable channel after this delay. What does it mean if an update has known issues? Red Hat continuously evaluates data from multiple sources to determine whether updates from one version to another have any declared issues. Identified issues are typically documented in the version's release notes. Even if the update path has known issues, customers are still supported if they perform the update. Red Hat does not block users from updating to a certain version. Red Hat may declare conditional update risks, which may or may not apply to a particular cluster. Declared risks provide cluster administrators more context about a supported update. Cluster administrators can still accept the risk and update to that particular target version. What if I see that an update to a particular release is no longer recommended? If Red Hat removes update recommendations from any supported release due to a regression, a superseding update recommendation will be provided to a future version that corrects the regression. There may be a delay while the defect is corrected, tested, and promoted to your selected channel. How long until the z-stream release is made available on the fast and stable channels? While the specific cadence can vary based on a number of factors, new z-stream releases for the latest minor version are typically made available about every week. Older minor versions, which have become more stable over time, may take much longer for new z-stream releases to be made available. Important These are only estimates based on past data about z-stream releases. Red Hat reserves the right to change the release frequency as needed. Any number of issues could cause irregularities and delays in this release cadence. Once a z-stream release is published, it also appears in the fast channel for that minor version. After a delay, the z-stream release may then appear in that minor version's stable channel. Additional resources Understanding update channels and releases 1.1.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. 
Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue related to the update path, such as incompatibility or availability. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 1.1.3. Understanding cluster Operator condition types The status of cluster Operators includes their condition type, which informs you of the current state of your Operator's health. The following definitions cover a list of some common ClusterOperator condition types. Operators that have additional condition types and use Operator-specific language have been omitted. The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster Operators so that cluster administrators can better understand the state of the OpenShift Container Platform cluster. Available: The condition type Available indicates that an Operator is functional and available in the cluster. 
If the status is False , at least one part of the operand is non-functional and the condition requires an administrator to intervene. Progressing: The condition type Progressing indicates that an Operator is actively rolling out new code, propagating configuration changes, or otherwise moving from one steady state to another. Operators do not report the condition type Progressing as True when they are reconciling a known state. If the observed cluster state has changed and the Operator is reacting to it, then the status reports back as True , since it is moving from one steady state to another. Degraded: The condition type Degraded indicates that an Operator has a current state that does not match its required state over a period of time. The period of time can vary by component, but a Degraded status represents persistent observation of an Operator's condition. As a result, an Operator does not fluctuate in and out of the Degraded state. There might be a different condition type if the transition from one state to another does not persist over a long enough period to report Degraded . An Operator does not report Degraded during the course of a normal update. An Operator may report Degraded in response to a persistent infrastructure failure that requires eventual administrator intervention. Note This condition type is only an indication that something may need investigation and adjustment. As long as the Operator is available, the Degraded condition does not cause user workload failure or application downtime. Upgradeable: The condition type Upgradeable indicates whether the Operator is safe to update based on the current cluster state. The message field contains a human-readable description of what the administrator needs to do for the cluster to successfully update. The CVO allows updates when this condition is True , Unknown or missing. When the Upgradeable status is False , only minor updates are impacted, and the CVO prevents the cluster from performing impacted updates unless forced. 1.1.4. Understanding cluster version condition types The Cluster Version Operator (CVO) monitors cluster Operators and other components, and is responsible for collecting the status of both the cluster version and its Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. In addition to Available , Progressing , and Upgradeable , there are condition types that affect cluster versions and Operators. Failing: The cluster version condition type Failing indicates that a cluster cannot reach its desired state, is unhealthy, and requires an administrator to intervene. Invalid: The cluster version condition type Invalid indicates that the cluster version has an error that prevents the server from taking action. The CVO only reconciles the current state as long as this condition is set. RetrievedUpdates: The cluster version condition type RetrievedUpdates indicates whether or not available updates have been retrieved from the upstream update server. The condition is Unknown before retrieval, False if the updates either recently failed or could not be retrieved, or True if the availableUpdates field is both recent and accurate. ReleaseAccepted: The cluster version condition type ReleaseAccepted with a True status indicates that the requested release payload was successfully loaded without failure during image verification and precondition checking. 
ImplicitlyEnabledCapabilities: The cluster version condition type ImplicitlyEnabledCapabilities with a True status indicates that there are enabled capabilities that the user is not currently requesting through spec.capabilities . The CVO does not support disabling capabilities if any associated resources were previously managed by the CVO. 1.1.5. Common terms Control plane The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. Cluster Version Operator The Cluster Version Operator (CVO) starts the update process for the cluster. It checks with OSUS based on the current cluster version and retrieves the graph which contains available or possible update paths. Machine Config Operator The Machine Config Operator (MCO) is a cluster-level Operator that manages the operating system and machine configurations. Through the MCO, platform administrators can configure and update systemd, CRI-O and Kubelet, the kernel, NetworkManager, and other system features on the worker nodes. OpenShift Update Service The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including to Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. Channels Channels declare an update strategy tied to minor versions of OpenShift Container Platform. The OSUS uses this configured strategy to recommend update edges consistent with that strategy. Recommended update edge A recommended update edge is a recommended update between OpenShift Container Platform releases. Whether a given update is recommended can depend on the cluster's configured channel, current version, known bugs, and other information. OSUS communicates the recommended edges to the CVO, which runs in every cluster. Additional resources Machine Config Overview Using the OpenShift Update Service in a disconnected environment Update channels 1.1.6. Additional resources How cluster updates work . 1.2. How cluster updates work The following sections describe each major aspect of the OpenShift Container Platform (OCP) update process in detail. For a general overview of how updates work, see the Introduction to OpenShift updates . 1.2.1. The Cluster Version Operator The Cluster Version Operator (CVO) is the primary component that orchestrates and facilitates the OpenShift Container Platform update process. During installation and standard cluster operation, the CVO is constantly comparing the manifests of managed cluster Operators to in-cluster resources, and reconciling discrepancies to ensure that the actual state of these resources match their desired state. 1.2.1.1. The ClusterVersion object One of the resources that the Cluster Version Operator (CVO) monitors is the ClusterVersion resource. Administrators and OpenShift components can communicate or interact with the CVO through the ClusterVersion object. The desired CVO state is declared through the ClusterVersion object and the current CVO state is reflected in the object's status. Note Do not directly modify the ClusterVersion object. Instead, use interfaces such as the oc CLI or the web console to declare your update target. The CVO continually reconciles the cluster with the target state declared in the spec property of the ClusterVersion resource. 
When the desired release differs from the actual release, that reconciliation updates the cluster. Update availability data The ClusterVersion resource also contains information about updates that are available to the cluster. This includes updates that are available, but not recommended due to a known risk that applies to the cluster. These updates are known as conditional updates. To learn how the CVO maintains this information about available updates in the ClusterVersion resource, see the "Evaluation of update availability" section. You can inspect all available updates with the following command: USD oc adm upgrade --include-not-recommended Note The additional --include-not-recommended parameter includes updates that are available with known issues that apply to the cluster. Example output Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. 
https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 The oc adm upgrade command queries the ClusterVersion resource for information about available updates and presents it in a human-readable format. One way to directly inspect the underlying availability data created by the CVO is by querying the ClusterVersion resource with the following command: USD oc get clusterversion version -o json | jq '.status.availableUpdates' Example output [ { "channels": [ "candidate-4.11", "candidate-4.12", "fast-4.11", "fast-4.12" ], "image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775", "url": "https://access.redhat.com/errata/RHBA-2023:3213", "version": "4.11.41" }, ... ] A similar command can be used to check conditional updates: USD oc get clusterversion version -o json | jq '.status.conditionalUpdates' Example output [ { "conditions": [ { "lastTransitionTime": "2023-05-30T16:28:59Z", "message": "The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136", "reason": "PatchesOlderRelease", "status": "False", "type": "Recommended" } ], "release": { "channels": [...], "image": "quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d", "url": "https://access.redhat.com/errata/RHBA-2023:1733", "version": "4.11.36" }, "risks": [...] }, ... ] 1.2.1.2. Evaluation of update availability The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update possibilities. This data is based on the cluster's subscribed channel. The CVO then saves information about update recommendations into either the availableUpdates or conditionalUpdates field of its ClusterVersion resource. The CVO periodically checks the conditional updates for update risks. These risks are conveyed through the data served by the OSUS, which contains information for each version about known issues that might affect a cluster updated to that version. Most risks are limited to clusters with specific characteristics, such as clusters with a certain size or clusters that are deployed in a particular cloud platform. The CVO continuously evaluates its cluster characteristics against the conditional risk information for each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this information in the conditionalUpdates field of its ClusterVersion resource. If the CVO finds that the cluster does not match the risks of an update, or that there are no risks associated with the update, it stores the target version in the availableUpdates field of its ClusterVersion resource. The user interface, either the web console or the OpenShift CLI ( oc ), presents this information in sectioned headings to the administrator. 
Each known issue associated with the update path contains a link to further resources about the risk so that the administrator can make an informed decision about the update. Additional resources Update recommendation removals and Conditional Updates 1.2.2. Release images A release image is the delivery mechanism for a specific OpenShift Container Platform (OCP) version. It contains the release metadata, a Cluster Version Operator (CVO) binary matching the release version, every manifest needed to deploy individual OpenShift cluster Operators, and a list of SHA digest-versioned references to all container images that make up this OpenShift version. You can inspect the content of a specific release image by running the following command: USD oc adm release extract <release image> Example output USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z USD ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 ... 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata 1 Manifest for ClusterResourceQuota CRD, to be applied on Runlevel 03 2 Manifest for PrometheusRoleBinding resource for the service-ca-operator , to be applied on Runlevel 90 3 List of SHA digest-versioned references to all required images 1.2.3. Update process workflow The following steps represent a detailed workflow of the OpenShift Container Platform (OCP) update process: The target version is stored in the spec.desiredUpdate.version field of the ClusterVersion resource, which may be managed through the web console or the CLI. The Cluster Version Operator (CVO) detects that the desiredUpdate in the ClusterVersion resource differs from the current cluster version. Using graph data from the OpenShift Update Service, the CVO resolves the desired cluster version to a pull spec for the release image. The CVO validates the integrity and authenticity of the release image. Red Hat publishes cryptographically-signed statements about published release images at predefined locations by using image SHA digests as unique and immutable release image identifiers. The CVO utilizes a list of built-in public keys to validate the presence and signatures of the statement matching the checked release image. The CVO creates a job named version-USDversion-USDhash in the openshift-cluster-version namespace. This job uses containers that are executing the release image, so the cluster downloads the image through the container runtime. The job then extracts the manifests and metadata from the release image to a shared volume that is accessible to the CVO. The CVO validates the extracted manifests and metadata. The CVO checks some preconditions to ensure that no problematic condition is detected in the cluster. Certain conditions can prevent updates from proceeding. These conditions are either determined by the CVO itself, or reported by individual cluster Operators that detect some details about the cluster that the Operator considers problematic for the update. 
The CVO records the accepted release in status.desired and creates a status.history entry about the new update. The CVO begins reconciling the manifests from the release image. Cluster Operators are updated in separate stages called Runlevels, and the CVO ensures that all Operators in a Runlevel finish updating before it proceeds to the level. Manifests for the CVO itself are applied early in the process. When the CVO deployment is applied, the current CVO pod stops, and a CVO pod that uses the new version starts. The new CVO proceeds to reconcile the remaining manifests. The update proceeds until the entire control plane is updated to the new version. Individual cluster Operators might perform update tasks on their domain of the cluster, and while they do so, they report their state through the Progressing=True condition. The Machine Config Operator (MCO) manifests are applied towards the end of the process. The updated MCO then begins updating the system configuration and operating system of every node. Each node might be drained, updated, and rebooted before it starts to accept workloads again. The cluster reports as updated after the control plane update is finished, usually before all nodes are updated. After the update, the CVO maintains all cluster resources to match the state delivered in the release image. 1.2.4. Understanding how manifests are applied during an update Some manifests supplied in a release image must be applied in a certain order because of the dependencies between them. For example, the CustomResourceDefinition resource must be created before the matching custom resources. Additionally, there is a logical order in which the individual cluster Operators must be updated to minimize disruption in the cluster. The Cluster Version Operator (CVO) implements this logical order through the concept of Runlevels. These dependencies are encoded in the filenames of the manifests in the release image: 0000_<runlevel>_<component>_<manifest-name>.yaml For example: 0000_03_config-operator_01_proxy.crd.yaml The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following rules: During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel. Within one Runlevel, manifests for different components can be applied in parallel. Within one Runlevel, manifests for a single component are applied in lexicographic order. The CVO then applies manifests following the generated dependency graph. Note For some resource types, the CVO monitors the resource after its manifest is applied, and considers it to be successfully updated only after the resource reaches a stable state. Achieving this state can take some time. This is especially true for ClusterOperator resources, while the CVO waits for a cluster Operator to update itself and then update its ClusterOperator status. The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it proceeds to the Runlevel: The cluster Operators have an Available=True condition. The cluster Operators have a Degraded=False condition. The cluster Operators declare they have achieved the desired version in their ClusterOperator resource. Some actions can take significant time to finish. The CVO waits for the actions to complete in order to ensure the subsequent Runlevels can proceed safely. 
Initially reconciling the new release's manifests is expected to take 60 to 120 minutes in total; see Understanding OpenShift Container Platform update duration for more information about factors that influence update duration. In the example diagram, the CVO is waiting until all work is completed at Runlevel 20. The CVO has applied all manifests to the Operators in the Runlevel, but the kube-apiserver-operator ClusterOperator performs some actions after its new version was deployed. The kube-apiserver-operator ClusterOperator declares this progress through the Progressing=True condition and by not declaring the new version as reconciled in its status.versions . The CVO waits until the ClusterOperator reports an acceptable status, and then it will start reconciling manifests at Runlevel 25. Additional resources Understanding OpenShift Container Platform update duration 1.2.5. Understanding how the Machine Config Operator updates nodes The Machine Config Operator (MCO) applies a new machine configuration to each control plane node and compute node. During the machine configuration update, control plane nodes and compute nodes are organized into their own machine config pools, where the pools of machines are updated in parallel. The .spec.maxUnavailable parameter, which has a default value of 1 , determines how many nodes in a machine config pool can simultaneously undergo the update process. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. When the machine configuration update process begins, the MCO checks the amount of currently unavailable nodes in a pool. If there are fewer unavailable nodes than the value of .spec.maxUnavailable , the MCO initiates the following sequence of actions on available nodes in the pool: Cordon and drain the node Note When a node is cordoned, workloads cannot be scheduled to it. Update the system configuration and operating system (OS) of the node Reboot the node Uncordon the node A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of .spec.maxUnavailable . As a node completes its update and becomes available, the number of unavailable nodes in the machine config pool is once again fewer than .spec.maxUnavailable . If there are remaining nodes that need to be updated, the MCO initiates the update process on a node until the .spec.maxUnavailable limit is once again reached. This process repeats until each control plane node and compute node has been updated. The following example workflow describes how this process might occur in a machine config pool with 5 nodes, where .spec.maxUnavailable is 3 and all nodes are initially available: The MCO cordons nodes 1, 2, and 3, and begins to drain them. Node 2 finishes draining, reboots, and becomes available again. The MCO cordons node 4 and begins draining it. Node 1 finishes draining, reboots, and becomes available again. The MCO cordons node 5 and begins draining it. Node 3 finishes draining, reboots, and becomes available again. Node 5 finishes draining, reboots, and becomes available again. Node 4 finishes draining, reboots, and becomes available again. 
Because the update process for each node is independent of other nodes, some nodes in the example above finish their update out of the order in which they were cordoned by the MCO. You can check the status of the machine configuration update by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Additional resources Machine Config Overview 1.3. Understanding update channels and releases Update channels are the mechanism by which users declare the OpenShift Container Platform minor version they intend to update their clusters to. They also allow users to choose the timing and level of support their updates will have through the fast , stable , candidate , and eus channel options. The Cluster Version Operator uses an update graph based on the channel declaration, along with other conditional information, to provide a list of recommended and conditional updates available to the cluster. Update channels correspond to a minor version of OpenShift Container Platform. The version number in the channel represents the target minor version that the cluster will eventually be updated to, even if it is higher than the cluster's current minor version. For instance, OpenShift Container Platform 4.10 update channels provide the following recommendations: Updates within 4.10. Updates within 4.9. Updates from 4.9 to 4.10, allowing all 4.9 clusters to eventually update to 4.10, even if they do not immediately meet the minimum z-stream version requirements. eus-4.10 only: updates within 4.8. eus-4.10 only: updates from 4.8 to 4.9 to 4.10, allowing all 4.8 clusters to eventually update to 4.10. 4.10 update channels do not recommend updates to 4.11 or later releases. This strategy ensures that administrators must explicitly decide to update to the minor version of OpenShift Container Platform. Update channels control only release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of OpenShift Container Platform always installs that version. OpenShift Container Platform 4.17 offers the following update channels: stable-4.17 eus-4.y (only offered for EUS versions and meant to facilitate updates between EUS versions) fast-4.17 candidate-4.17 If you do not want the Cluster Version Operator to fetch available updates from the update recommendation service, you can use the oc adm upgrade channel command in the OpenShift CLI to configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted network access and there is no local, reachable update recommendation service. Warning Red Hat recommends updating to versions suggested by OpenShift Update Service only. For a minor version update, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions. 1.3.1. Update channels 1.3.1.1. fast-4.17 channel The fast-4.17 channel is updated with new versions of OpenShift Container Platform 4.17 as soon as Red Hat declares the version as a general availability (GA) release. As such, these releases are fully supported and purposed to be used in production environments. 1.3.1.2. 
stable-4.17 channel While the fast-4.17 channel contains releases as soon as their errata are published, releases are added to the stable-4.17 channel after a delay. During this delay, data is collected from multiple sources and analyzed for indications of product regressions. Once a significant number of data points have been collected, these releases are added to the stable channel. Note Since the time required to obtain a significant number of data points varies based on many factors, a Service Level Objective (SLO) is not offered for the delay duration between fast and stable channels. For more information, see "Choosing the correct channel for your cluster". Newly installed clusters default to using stable channels. 1.3.1.3. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform offer Extended Update Support (EUS). Releases promoted to the stable channel are also simultaneously promoted to the EUS channels. The primary purpose of the EUS channels is to serve as a convenience for clusters performing a Control Plane Only update. Note Both standard and non-EUS subscribers can access all EUS repositories and necessary RPMs ( rhel-*-eus-rpms ) to be able to support critical purposes such as debugging and building drivers. 1.3.1.4. candidate-4.17 channel The candidate-4.17 channel offers unsupported early access to releases as soon as they are built. Releases present only in candidate channels may not contain the full feature set of eventual GA releases, or features may be removed prior to GA. Additionally, these releases have not been subject to full Red Hat Quality Assurance and may not offer update paths to later GA releases. Given these caveats, the candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. 1.3.1.5. Update recommendations in the channel OpenShift Container Platform maintains an update recommendation service that knows your installed OpenShift Container Platform version and the path to take within the channel to get you to the next release. Update paths are also limited to versions relevant to your currently selected channel and its promotion characteristics. You can imagine seeing the following releases in your channel: 4.17.0 4.17.1 4.17.3 4.17.4 The service recommends only updates that have been tested and have no known serious regressions. For example, if your cluster is on 4.17.1 and OpenShift Container Platform suggests 4.17.4, then it is recommended to update from 4.17.1 to 4.17.4. Important Do not rely on consecutive patch numbers. In this example, 4.17.2 is not and never was available in the channel, therefore updates to 4.17.2 are not recommended or supported. 1.3.1.6. Update recommendations and Conditional Updates Red Hat monitors newly released versions and update paths associated with those versions before and after they are added to supported channels. If Red Hat removes update recommendations from any supported release, a superseding update recommendation will be provided to a future version that corrects the regression. There may, however, be a delay while the defect is corrected, tested, and promoted to your selected channel. Beginning in OpenShift Container Platform 4.10, when update risks are confirmed, they are declared as Conditional Update risks for the relevant updates. Each known risk may apply to all clusters or only clusters matching certain conditions.
Some examples include having the Platform set to None or the CNI provider set to OpenShiftSDN . The Cluster Version Operator (CVO) continually evaluates known risks against the current cluster state. If no risks match, the update is recommended. If the risk matches, those update paths are labeled as updates with known issues , and a reference link to the known issues is provided. The reference link helps the cluster admin decide if they want to accept the risk and continue to update their cluster. When Red Hat chooses to declare Conditional Update risks, that action is taken in all relevant channels simultaneously. Declaration of a Conditional Update risk may happen either before or after the update has been promoted to supported channels. 1.3.1.7. Choosing the correct channel for your cluster Choosing the appropriate channel involves two decisions. First, select the minor version you want for your cluster update. Selecting a channel which matches your current version ensures that you only apply z-stream updates and do not receive feature updates. Selecting an available channel which has a version greater than your current version will ensure that after one or more updates your cluster will have updated to that version. Your cluster will only be offered channels which match its current version, the next version, or the next EUS version. Note Due to the complexity involved in planning updates between versions many minors apart, channels that assist in planning updates beyond a single Control Plane Only update are not offered. Second, you should choose your desired rollout strategy. You may choose to update as soon as Red Hat declares a release GA by selecting from fast channels, or you may want to wait for Red Hat to promote releases to the stable channel. Update recommendations offered in the fast-4.17 and stable-4.17 channels are both fully supported and benefit equally from ongoing data analysis. The promotion delay before promoting a release to the stable channel represents the only difference between the two channels. Updates to the latest z-streams are generally promoted to the stable channel within a week or two; however, the delay when initially rolling out updates to the latest minor is much longer, generally 45-90 days. Please consider the promotion delay when choosing your desired channel, as waiting for promotion to the stable channel may affect your scheduling plans. Additionally, there are several factors which may lead an organization to move clusters to the fast channel either permanently or temporarily, including: The desire to apply a specific fix known to affect your environment without delay. Application of CVE fixes without delay. CVE fixes may introduce regressions, so promotion delays still apply to z-streams with CVE fixes. Internal testing processes. If it takes your organization several weeks to qualify releases, it is best to test concurrently with our promotion process rather than waiting. This also assures that any telemetry signal provided to Red Hat is factored into our rollout, so issues relevant to you can be fixed faster. 1.3.1.8. Restricted network clusters If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact updates. During an update, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings. 1.3.1.9.
Switching between channels A channel can be switched from the web console or through the adm upgrade channel command: USD oc adm upgrade channel <channel> The web console will display an alert if you switch to a channel that does not include the current release. The web console does not recommend any updates while on a channel without the current release. You can return to the original channel at any point, however. Changing your channel might impact the supportability of your cluster. The following conditions might apply: Your cluster is still supported if you change from the stable-4.17 channel to the fast-4.17 channel. You can switch to the candidate-4.17 channel at any time, but some releases for this channel might be unsupported. You can switch from the candidate-4.17 channel to the fast-4.17 channel if your current release is a general availability release. You can always switch from the fast-4.17 channel to the stable-4.17 channel. There is a possible delay of up to a day for the release to be promoted to stable-4.17 if the current release was recently promoted. Additional resources Updating along a conditional upgrade path Choosing the correct channel for your cluster 1.4. Understanding OpenShift Container Platform update duration OpenShift Container Platform update duration varies based on the deployment topology. This page helps you understand the factors that affect update duration and estimate how long the cluster update takes in your environment. 1.4.1. Factors affecting update duration The following factors can affect your cluster update duration: The reboot of compute nodes to the new machine configuration by Machine Config Operator (MCO) The value of MaxUnavailable in the machine config pool Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. The minimum number or percentages of replicas set in pod disruption budget (PDB) The number of nodes in the cluster The health of the cluster nodes 1.4.2. Cluster update phases In OpenShift Container Platform, the cluster update happens in two phases: Cluster Version Operator (CVO) target update payload deployment Machine Config Operator (MCO) node updates 1.4.2.1. Cluster Version Operator target update payload deployment The Cluster Version Operator (CVO) retrieves the target update release image and applies to the cluster. All components which run as pods are updated during this phase, whereas the host components are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes. Note The CVO phase of the update does not restart the nodes. 1.4.2.2. Machine Config Operator node updates The Machine Config Operator (MCO) applies a new machine configuration to each control plane and compute node. During this process, the MCO performs the following sequential actions on each node of the cluster: Cordon and drain all the nodes Update the operating system (OS) Reboot the nodes Uncordon all nodes and schedule workloads on the node Note When a node is cordoned, workloads cannot be scheduled to it. The time to complete this process depends on several factors including the node and infrastructure configuration. This process might take 5 or more minutes to complete per node. 
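As a concrete illustration of the maxUnavailable factor described above (and expanded on below), one way to let two compute nodes update in parallel is to patch the compute machine config pool. The pool name worker and the value 2 are only examples, and the control plane pool should be left at its default of 1:
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"maxUnavailable":2}}'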
In addition to MCO, you should consider the impact of the following parameters: The control plane node update duration is predictable and oftentimes shorter than compute nodes, because the control plane workloads are tuned for graceful updates and quick drains. You can update the compute nodes in parallel by setting the maxUnavailable field to greater than 1 in the Machine Config Pool (MCP). The MCO cordons the number of nodes specified in maxUnavailable and marks them unavailable for update. When you increase maxUnavailable on the MCP, it can help the pool to update more quickly. However, if maxUnavailable is set too high, and several nodes are cordoned simultaneously, the pod disruption budget (PDB) guarded workloads could fail to drain because a schedulable node cannot be found to run the replicas. If you increase maxUnavailable for the MCP, ensure that you still have sufficient schedulable nodes to allow PDB guarded workloads to drain. Before you begin the update, you must ensure that all the nodes are available. Any unavailable nodes can significantly impact the update duration because the node unavailability affects the maxUnavailable and pod disruption budgets. To check the status of nodes from the terminal, run the following command: USD oc get node Example Output NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb If the status of the node is NotReady or SchedulingDisabled , then the node is not available and this impacts the update duration. You can check the status of nodes from the Administrator perspective in the web console by expanding Compute Nodes . Additional resources Machine Config Overview Pod disruption budget 1.4.2.3. Example update duration of cluster Operators The diagram shows an example of the time that cluster Operators might take to update to their new versions. The example is based on a three-node AWS OVN cluster, which has a healthy compute MachineConfigPool and no workloads that take long to drain, updating from 4.13 to 4.14. Note The specific update duration of a cluster and its Operators can vary based on several cluster characteristics, such as the target version, the amount of nodes, and the types of workloads scheduled to the nodes. Some Operators, such as the Cluster Version Operator, update themselves in a short amount of time. These Operators have either been omitted from the diagram or are included in the broader group of Operators labeled "Other Operators in parallel". Each cluster Operator has characteristics that affect the time it takes to update itself. For instance, the Kube API Server Operator in this example took more than eleven minutes to update because kube-apiserver provides graceful termination support, meaning that existing, in-flight requests are allowed to complete gracefully. This might result in a longer shutdown of the kube-apiserver . In the case of this Operator, update speed is sacrificed to help prevent and limit disruptions to cluster functionality during an update. Another characteristic that affects the update duration of an Operator is whether the Operator utilizes DaemonSets. 
The Network and DNS Operators utilize full-cluster DaemonSets, which can take time to roll out their version changes, and this is one of several reasons why these Operators might take longer to update themselves. The update duration for some Operators is heavily dependent on characteristics of the cluster itself. For instance, the Machine Config Operator update applies machine configuration changes to each node in the cluster. A cluster with many nodes has a longer update duration for the Machine Config Operator compared to a cluster with fewer nodes. Note Each cluster Operator is assigned a stage during which it can be updated. Operators within the same stage can update simultaneously, and Operators in a given stage cannot begin updating until all previous stages have been completed. For more information, see "Understanding how manifests are applied during an update" in the "Additional resources" section. Additional resources Introduction to OpenShift updates Understanding how manifests are applied during an update 1.4.3. Estimating cluster update time Historical update duration of similar clusters provides you with the best estimate for future cluster updates. However, if the historical data is not available, you can use the following convention to estimate your cluster update time: Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time) A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are always updated in parallel with the compute nodes. In addition, one or more compute nodes can be updated in parallel based on the maxUnavailable value. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. For example, to estimate the update time, consider an OpenShift Container Platform cluster with three control plane nodes and six compute nodes, where each host takes about 5 minutes to reboot. Note The time it takes to reboot a particular node varies significantly. In cloud instances, the reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot might take more than 15 minutes. Scenario-1 When you set maxUnavailable to 1 for both the control plane and compute nodes Machine Config Pool (MCP), then all the six compute nodes will update one after another in each iteration: Cluster update time = 60 + (6 x 5) = 90 minutes Scenario-2 When you set maxUnavailable to 2 for the compute node MCP, then two compute nodes will update in parallel in each iteration. Therefore, it takes a total of three iterations to update all the nodes: Cluster update time = 60 + (3 x 5) = 75 minutes Important The default setting for maxUnavailable is 1 for all the MCPs in OpenShift Container Platform. It is recommended that you do not change the maxUnavailable in the control plane MCP. 1.4.4. Red Hat Enterprise Linux (RHEL) compute nodes Red Hat Enterprise Linux (RHEL) compute nodes require an additional usage of openshift-ansible to update node binary components. The actual time spent updating RHEL compute nodes should not be significantly different from Red Hat Enterprise Linux CoreOS (RHCOS) compute nodes. Additional resources Updating RHEL compute machines 1.4.5. Additional resources OpenShift Container Platform architecture OpenShift Container Platform updates
|
[
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"oc adm upgrade channel <channel>",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb",
"Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)",
"Cluster update time = 60 + (6 x 5) = 90 minutes",
"Cluster update time = 60 + (3 x 5) = 75 minutes"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/updating_clusters/understanding-openshift-updates-1
|
Chapter 4. Adding Red Hat integrations to the Hybrid Cloud Console
|
Chapter 4. Adding Red Hat integrations to the Hybrid Cloud Console You can connect your Red Hat OpenShift Container Platform environment to the Red Hat Hybrid Cloud Console as a cloud integration, so that the cost management service on the Hybrid Cloud Console can use data from your environment to track your cloud costs. You can use the cost management service to perform financially related tasks, such as: Visualizing, understanding, and analyzing the use of resources and costs Forecasting your future consumption and comparing them with budgets Optimizing resources and consumption Identifying patterns of usage for further analysis Integrating with third-party tools that can benefit from cost and resourcing data Note For Red Hat OpenShift Container Platform 4.6 and later, install the costmanagement-metrics-operator from the OpenShift Container Platform web console. For more information, see Integrating OpenShift Container Platform data into cost management . 4.1. Adding an OpenShift Container Platform integration You can connect your Red Hat OpenShift Container Platform environment to the Red Hat Hybrid Cloud Console as an integration so that you can use OpenShift Container Platform data with cost management. After adding the integration, you can view and manage your OpenShift Container Platform and other integrations from the Integrations page in the Hybrid Cloud Console. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator or as a user with Cloud Administrator permissions. You have access to an OpenShift Container Platform environment that you want to use with the Hybrid Cloud Console. Procedure Go to Settings > Integrations . Select the Red Hat tab. Click Add integration to open the integrations wizard. If this is the first integration you are adding, skip this step. Select Red Hat OpenShift Container Platform , and then click . Enter a descriptive name for the integration, for example, my_ocp_integration , and then click . Select Cost Management as the application, and then click . To install and configure the costmanagement-metrics-operator , use the steps in the wizard, and then click . Refer to Integrating OpenShift Container Platform data into cost management for additional information. Enter the Cluster Identifier , and then click . Review the integration details, and then click Add to finish adding the integration. Verification Go to the Integrations page, and select the Red Hat tab. Confirm that your OpenShift Container Platform integration is listed and the status is Ready .
| null |
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/redhat-cloud-integrations_crc-cloud-integrations
|
Chapter 6. Using your subscription
|
Chapter 6. Using your subscription Red Hat Service Interconnect is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. 6.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. 6.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. 6.3. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions Red Hat Enterprise Linux 9 - Registering the system and managing subscriptions
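If you register from a terminal, the Registration Assistant typically lists a subscription-manager command for your OS version. A representative example is shown below; the exact command and options to run are the ones the Registration Assistant displays for your system:
$ sudo subscription-manager register --username <your_username>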
| null |
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/installation/using_your_subscription
|
Chapter 6. Customizing the Object Storage service (swift)
|
Chapter 6. Customizing the Object Storage service (swift) You can customize some of the default settings of the Object Storage service (swift) to optimize deployment performance. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. 6.1. Changing default parameters for object storage You can customize the following options for your Object Storage service (swift) deployment in the OpenStackControlPlane CR: Table 6.1. OpenStackControlPlane CR options Option Description swiftProxy/ceilometerEnabled Enables the Ceilometer middleware in the proxy server. swiftProxy/encryptionEnabled Enables object encryption by using the Key Manager service (barbican). swiftRing/minPartHours Sets the minimum time in hours before a partition in a ring can be moved following a rebalance. swiftRing/partPower Sets the partition power to use when building Object Storage rings. swiftRing/ringReplicas Sets the number of object replicas to use in the Object Storage rings. You can customize the following configuration files for Object Storage services by using the defaultConfigOverwrite parameter and keys in the OpenStackControlPlane CR: Table 6.2. Configuration file options Service Key account-server 01-account-server.conf container-server 01-container-server.conf object-server 01-object-server.conf object-expirer 01-object-expirer.conf proxy-server 01-proxy-server.conf Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and enable Ceilometer middleware and object encryption under the swiftProxy parameter in the swift template: Add values for minPartHours , partPower , and ringReplicas under the swiftRing parameter: Replace <number_of_hours> with the minimum time in hours before you want a partition in a ring to be moved following a rebalance. Replace <partition_power> with the partition power you want to use when building Object Storage rings, for example, 12 . Replace <number_of_copies> with the number of object copies you want in your cluster. Change the number of workers in the object-server service by adding the defaultConfigOverwrite parameter under the swiftStorage parameter: Replace <number_of_workers> with the number of workers you want in the object-server service. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 6.2. Custom rings You can create custom rings to update existing Object Storage service (swift) clusters. When you add new nodes to a cluster, their characteristics might differ from those of the original nodes. Without custom adjustments, the larger capacity of the new nodes may be underused, or if weights change in the rings, data dispersion can become uneven, which reduces safety. The ring builder helps manage Object Storage as clusters grow and technologies evolve. For assistance with creating custom rings, contact Red Hat Support. 6.3. Checking cluster health The Object Storage service (swift) runs many processes in the background to ensure long-term data availability, durability, and persistence. For example: Auditors constantly re-read database and object files and compare them by using checksums to make sure there is no silent bit-rot. 
Any database or object file that no longer matches its checksum is quarantined and becomes unreadable on that node. The replicators then copy one of the other replicas to make the local copy available again. Objects and files can disappear when you replace disks or nodes or when objects are quarantined. When this happens, replicators copy missing objects or database files to one of the other nodes. The Object Storage service includes a tool called swift-recon that collects data from all nodes and checks the overall cluster health. You can use the swift-recon command line utility to obtain metrics from the account, container, and object servers. Procedure Log in to one of the Controller nodes. Run the following command: Optional: Use the --all option to return additional output. This command queries all servers on the ring for the following data: Async pendings: If the cluster load is too high and processes cannot update database files fast enough, some updates occur asynchronously. These numbers decrease over time. Replication metrics: Review the replication timestamps; full replication passes happen frequently with few errors. An old entry, for example, an entry with a timestamp from six months ago, indicates that replication on the node has not completed in the last six months. Ring md5sums: This ensures that all ring files are consistent on all nodes. swift.conf md5sums: This ensures that all configuration files are consistent on all nodes. Quarantined files: There must be no, or very few, quarantined files for all nodes. Time-sync: All nodes must be synchronized.
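For reference, the individual customizations described in Section 6.1 can be combined into a single OpenStackControlPlane spec. The following is a minimal sketch only; the values shown for minPartHours , partPower , ringReplicas , and workers are illustrative placeholders that you must adapt to your own deployment:
spec:
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 2
        ceilometerEnabled: true
        encryptionEnabled: true
      swiftRing:
        minPartHours: 1
        partPower: 12
        ringReplicas: 3
      swiftStorage:
        replicas: 3
        storageClass: local-storage
        storageRequest: 10Gi
        defaultConfigOverwrite:
          01-object-server.conf: |
            [DEFAULT]
            workers = 4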
|
[
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: swift: enabled: true template: swiftProxy: ceilometerEnabled: true encryptionEnabled: true replicas: 2",
"spec: swift: enabled: true template: swiftProxy: swiftRing: minPartHours: <number_of_hours> partPower: <partition_power> ringReplicas: <number_of_copies>",
"spec: swift: enabled: true template: swiftProxy: swiftRing: swiftStorage: replicas: 3 storageClass: local-storage storageRequest: 10Gi defaultConfigOverwrite: 01-object-server.conf: | [DEFAULT] workers = <number_of_workers>",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-recon -arqlT --md5'"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/customizing_persistent_storage/assembly_swift-customizing-swift_configuring-glance
|
Chapter 2. Deleting or updating Kustomize manifest resources
|
Chapter 2. Deleting or updating Kustomize manifest resources MicroShift supports the deletion of manifest resources in the following situations: Manifest removal: Manifests can be removed when you need to completely remove a resource from the cluster. Manifest upgrade: During an application upgrade, some resources might need to be removed while others are retained to preserve data. When creating new manifests, you can use manifest resource deletion to remove or update old objects, ensuring there are no conflicts or issues. Important Manifest files placed in the delete subdirectories are not automatically removed and require manual deletion. Only the resources listed in the manifest files placed in the delete subdirectories are deleted. 2.1. How manifest deletion works By default, MicroShift searches for deletion manifests in the delete subdirectories within the manifests path. When a user places a manifest in these subdirectories, MicroShift removes the manifests when the system is started. Read through the following to understand how manifests deletion works in MicroShift. Each time the system starts, before applying the manifests, MicroShift scans the following delete subdirectories within the configured manifests directory to identify the manifests that need to be deleted: /usr/lib/microshift/manifests/delete /usr/lib/microshift/manifests.d/delete/* /etc/microshift/manifests/delete /etc/microshift/manifests.d/delete/* MicroShift deletes the resources defined in the manifests found in the delete directories by running the equivalent of the kubectl delete --ignore-not-found -k command. 2.2. Use cases for manifest resource deletion The following sections explain the use case in which the manifest resource deletion is used. 2.2.1. Removing manifests for RPM systems Use the following procedure in the data removal scenario for RPM systems to completely delete the resource defined in the manifests. Procedure Identify the manifest that needs to be placed in the delete subdirectories. Create the delete subdirectory in which the manifest will be placed by running the following command: USD sudo mkdir -p <path_of_delete_directory> 1 1 Replace <path_of_delete_directory> with one of the following valid directory paths: /etc/microshift/manifests.d/delete , /etc/microshift/manifests/delete/ , /usr/lib/microshift/manifests.d/delete , or /usr/lib/microshift/manifests/delete . Move the manifest file into one of the delete subdirectories under the configured manifests directory by running the following command: USD [sudo] mv <path_of_manifests> <path_of_delete_directory> where: <path_of_manifests> :: Specifies the path of the manifest to be deleted, for example /etc/microshift/manifests.d/010-SOME-MANIFEST . <path_of_delete_directory> :: Specifies one of the following valid directory paths: /etc/microshift/manifests.d/delete , /etc/microshift/manifests/delete , /usr/lib/microshift/manifests.d/delete or /usr/lib/microshift/manifests/delete . Restart MicroShift by running the following command: USD sudo systemctl restart microshift MicroShift detects and removes the resource after the manifest file is placed in the delete subdirectories. 2.2.2. Removing manifests for OSTree systems Use the following procedure to completely delete the resource defined in the manifests. Important For OSTree installation, the delete subdirectories are read-only. Procedure Identify the manifest that needs to be placed in the delete subdirectories. Package the manifest into an RPM. 
See Building the RPM package for the application for the procedure to package the manifest into an RPM. Add the packaged RPM to the blueprint file to install it into correct location. See Adding application RPMs to a blueprint for the procedure to add an RPM to a blueprint. 2.2.3. Upgrading manifests for RPM systems Use the following procedure to remove some resources while retaining others to preserve data. Procedure Identify the manifest that requires updating. Create new manifests to be applied in the manifest directories. Create new manifests for resource deletion. It is not necessary to include the spec in these manifests. See Using manifests example to create new manifests using the example. Use the procedure in "Removing manifests for RPM systems" to create delete subdirectories and place the manifests created for resource deletion in this path. 2.2.4. Upgrading manifests for OSTree systems Use the following procedure to remove some resources while retaining others to preserve data. Important For OSTree systems, the delete subdirectories are read-only. Procedure Identify the manifest that needs updating. Create a new manifest to apply in the manifest directories. See Using manifests example to create new manifests using the example. Create a new manifest for resource deletion to be placed in the delete subdirectories. Use the procedure in "Removing manifests for OSTree systems" to remove the manifests. 2.3. Additional resources Using Kustomize manifests to deploy applications
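To make the deletion-manifest layout concrete, the following is a minimal sketch for an RPM system; the directory and resource names are illustrative. Because MicroShift runs the equivalent of kubectl delete --ignore-not-found -k against the delete subdirectories, each resource only needs its apiVersion , kind , and identifying metadata, not its spec :
# /etc/microshift/manifests.d/delete/020-my-app/kustomization.yaml
resources:
  - deployment.yaml
# /etc/microshift/manifests.d/delete/020-my-app/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app-namespace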
|
[
"sudo mkdir -p <path_of_delete_directory> 1",
"[sudo] mv <path_of_manifests> <path_of_delete_directory>",
"sudo systemctl restart microshift"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/microshift-deleting-resource-manifests
|
3.3. Basic SystemTap Handler Constructs
|
3.3. Basic SystemTap Handler Constructs SystemTap supports the use of several basic constructs in handlers. The syntax for most of these handler constructs are mostly based on C and awk syntax. This section describes several of the most useful SystemTap handler constructs, which should provide you with enough information to write simple yet useful SystemTap scripts. 3.3.1. Variables Variables can be used freely throughout a handler; simply choose a name, assign a value from a function or expression to it, and use it in an expression. SystemTap automatically identifies whether a variable should be typed as a string or integer, based on the type of the values assigned to it. For instance, if you set the variable var to gettimeofday_s() (as in var = gettimeofday_s() ), then var is typed as a number and can be printed in a printf() with the integer format specifier ( %d ). Note, however, that by default variables are only local to the probe they are used in. This means that variables are initialized, used and disposed at each probe handler invocation. To share a variable between probes, declare the variable name using global outside of the probes. Consider the following example: Example 3.8. timer-jiffies.stp global count_jiffies, count_ms probe timer.jiffies(100) { count_jiffies ++ } probe timer.ms(100) { count_ms ++ } probe timer.ms(12345) { hz=(1000*count_jiffies) / count_ms printf ("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n", count_jiffies, count_ms, hz) exit () } Example 3.8, "timer-jiffies.stp" computes the CONFIG_HZ setting of the kernel using timers that count jiffies and milliseconds, then computing accordingly. The global statement allows the script to use the variables count_jiffies and count_ms (set in their own respective probes) to be shared with probe timer.ms(12345) . Note The ++ notation in Example 3.8, "timer-jiffies.stp" ( count_jiffies ++ and count_ms ++ ) is used to increment the value of a variable by 1. In the following probe, count_jiffies is incremented by 1 every 100 jiffies: probe timer.jiffies(100) { count_jiffies ++ } In this instance, SystemTap understands that count_jiffies is an integer. Because no initial value was assigned to count_jiffies , its initial value is zero by default. 3.3.2. Conditional Statements In some cases, the output of a SystemTap script may be too big. To address this, you need to further refine the script's logic in order to delimit the output into something more relevant or useful to your probe. You can do this by using conditionals in handlers. SystemTap accepts the following types of conditional statements: If/Else Statements Format: if ( condition ) statement1 else statement2 The statement1 is executed if the condition expression is non-zero. The statement2 is executed if the condition expression is zero. The else clause ( else statement2 ) is optional. Both statement1 and statement2 can be statement blocks. Example 3.9. ifelse.stp global countread, countnonread probe kernel.function("vfs_read"),kernel.function("vfs_write") { if (probefunc()=="vfs_read") countread ++ else countnonread ++ } probe timer.s(5) { exit() } probe end { printf("VFS reads total %d\n VFS writes total %d\n", countread, countnonread) } Example 3.9, "ifelse.stp" is a script that counts how many virtual file system reads ( vfs_read ) and writes ( vfs_write ) the system performs within a 5-second span. 
When run, the script increments the value of the variable countread by 1 if the name of the function it probed matches vfs_read (as noted by the condition if (probefunc()=="vfs_read") ); otherwise, it increments countnonread ( else {countnonread ++} ). While Loops Format: while ( condition ) statement So long as condition is non-zero the block of statements in statement are executed. The statement is often a statement block and it must change a value so condition will eventually be zero. For Loops Format: for ( initialization ; conditional ; increment ) statement The for loop is simply shorthand for a while loop. The following is the equivalent while loop: initialization while ( conditional ) { statement increment } Conditional Operators Aside from == (is equal to), you can also use the following operators in your conditional statements: >= Greater than or equal to <= Less than or equal to != Is not equal to 3.3.3. Command-Line Arguments You can also allow a SystemTap script to accept simple command-line arguments using a USD or @ immediately followed by the number of the argument on the command line. Use USD if you are expecting the user to enter an integer as a command-line argument, and @ if you are expecting a string. Example 3.10. commandlineargs.stp probe kernel.function(@1) { } probe kernel.function(@1).return { } Example 3.10, "commandlineargs.stp" is similar to Example 3.1, "wildcards.stp" , except that it allows you to pass the kernel function to be probed as a command-line argument (as in stap commandlineargs.stp kernel function ). You can also specify the script to accept multiple command-line arguments, noting them as @1 , @2 , and so on, in the order they are entered by the user.
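The while and for loop formats above are not accompanied by a complete script, so the following short sketch (not taken from the original examples) shows a for loop and the > conditional operator in use; it prints a three-step countdown and exits:
probe begin {
  for (i = 3; i > 0; i--)
    printf("countdown: %d\n", i)
  printf("done\n")
  exit()
}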
|
[
"global count_jiffies, count_ms probe timer.jiffies(100) { count_jiffies ++ } probe timer.ms(100) { count_ms ++ } probe timer.ms(12345) { hz=(1000*count_jiffies) / count_ms printf (\"jiffies:ms ratio %d:%d => CONFIG_HZ=%d\\n\", count_jiffies, count_ms, hz) exit () }",
"probe timer.jiffies(100) { count_jiffies ++ }",
"if ( condition ) statement1 else statement2",
"global countread, countnonread probe kernel.function(\"vfs_read\"),kernel.function(\"vfs_write\") { if (probefunc()==\"vfs_read\") countread ++ else countnonread ++ } probe timer.s(5) { exit() } probe end { printf(\"VFS reads total %d\\n VFS writes total %d\\n\", countread, countnonread) }",
"while ( condition ) statement",
"for ( initialization ; conditional ; increment ) statement",
"initialization while ( conditional ) { statement increment }",
"probe kernel.function(@1) { } probe kernel.function(@1).return { }"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/scriptconstructions
|
Configuring dynamic plugins
|
Configuring dynamic plugins Red Hat Developer Hub 1.4 Red Hat Customer Content Services
|
[
"argocd: appLocatorMethods: - type: 'config' instances: - name: argoInstance1 url: https://argoInstance1.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD} - name: argoInstance2 url: https://argoInstance2.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD}",
"annotations: # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app. argocd/app-selector: 'USD{ARGOCD_LABEL_SELECTOR}'",
"annotations: # The Argo CD instance name used in `app-config.yaml`. argocd/instance-name: 'USD{ARGOCD_INSTANCE}'",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/backstage-community-plugin-redhat-argocd disabled: false",
"kubernetes: customResources: - group: 'argoproj.io' apiVersion: 'v1alpha1' plural: 'Rollouts' - group: 'argoproj.io' apiVersion: 'v1alpha1' plural: 'analysisruns'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - argoproj.io resources: - rollouts - analysisruns verbs: - get - list",
"apply -f <your-clusterrole-file>.yaml",
"annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE>",
"annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'",
"labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"labels: app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-jfrog-artifactory disabled: false",
"proxy: endpoints: '/jfrog-artifactory/api': target: http://<hostname>:8082 # or https://<customer>.jfrog.io headers: # Authorization: 'Bearer <YOUR TOKEN>' # Change to \"false\" in case of using a self-hosted Artifactory instance with a self-signed certificate secure: true",
"metadata: annotations: 'jfrog-artifactory/image-name': '<IMAGE-NAME>'",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-keycloak-dynamic disabled: false",
"catalog: providers: keycloakOrg: default: # # highlight-add-start schedule: # optional; same options as in TaskScheduleDefinition # supports cron, ISO duration, \"human duration\" as used in code frequency: { minutes: 1 } # supports ISO duration, \"human duration\" as used in code timeout: { minutes: 1 } initialDelay: { seconds: 15 } # highlight-add-end",
"catalog: providers: keycloakOrg: default: # # highlight-add-start userQuerySize: 500 # Optional groupQuerySize: 250 # Optional # highlight-add-end",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-nexus-repository-manager disabled: false",
"proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to \"false\" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true",
"nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path",
"nexusRepositoryManager: experimentalAnnotations: true",
"metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`,",
"kubernetes: customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - \"\" resources: - pods/log verbs: - get - list - watch - apiGroups: - tekton.dev resources: - pipelineruns - taskruns verbs: - get - list",
"annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-namespace: <RESOURCE_NS>",
"annotations: janus-idp.io/tekton : <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'",
"labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-tekton disabled: false",
"auth: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-topology disabled: false",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - route.openshift.io resources: - routes verbs: - get - list",
"kubernetes: customResources: - group: 'route.openshift.io' apiVersion: 'v1' plural: 'routes'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - '' resources: - pods - pods/log verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - tekton.dev resources: - pipelines - pipelineruns - taskruns verbs: - get - list",
"kubernetes: customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelines' - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' plural: 'taskruns'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - kubevirt.io resources: - virtualmachines - virtualmachineinstances verbs: - get - list",
"kubernetes: customResources: - group: 'kubevirt.io' apiVersion: 'v1' plural: 'virtualmachines' - group: 'kubevirt.io' apiVersion: 'v1' plural: 'virtualmachineinstances'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - org.eclipse.che resources: - checlusters verbs: - get - list",
"kubernetes: customResources: - group: 'org.eclipse.che' apiVersion: 'v2' plural: 'checlusters'",
"annotations: app.openshift.io/vcs-uri: <GIT_REPO_URL>",
"annotations: app.openshift.io/vcs-ref: <GIT_REPO_BRANCH>",
"annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>`",
"annotations: backstage.io/kubernetes-namespace: <RESOURCE_NS>",
"annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'",
"annotations: backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)'",
"labels: component: che # add this label to your che cluster instance labels: component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity",
"labels: app.openshift.io/runtime: <RUNTIME_NAME>",
"labels: app.kubernetes.io/name: <RUNTIME_NAME>",
"labels: app.kubernetes.io/part-of: <GROUP_NAME>",
"annotations: app.openshift.io/connects-to: '[{\"apiVersion\": <RESOURCE_APIVERSION>,\"kind\": <RESOURCE_KIND>,\"name\": <RESOURCE_NAME>}]'",
"plugins: - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import disabled: false",
"p, role:default/bulk-import, bulk.import, use, allow g, user:default/ <your_user> , role:default/bulk-import",
"{ \"actor\": { \"actorId\": \"user:default/myuser\", \"hostname\": \"localhost\", \"ip\": \"::1\", \"userAgent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36\" }, \"eventName\": \"BulkImportFindAllOrganizations\", \"isAuditLog\": true, \"level\": \"info\", \"message\": \"'get /organizations' endpoint hit by user:default/myuser\", \"meta\": {}, \"plugin\": \"bulk-import\", \"request\": { \"body\": {}, \"method\": \"GET\", \"params\": {}, \"query\": { \"pagePerIntegration\": \"1\", \"sizePerIntegration\": \"5\" }, \"url\": \"/api/bulk-import/organizations?pagePerIntegration=1&sizePerIntegration=5\" }, \"response\": { \"status\": 200 }, \"service\": \"backstage\", \"stage\": \"completion\", \"status\": \"succeeded\", \"timestamp\": \"2024-08-26 16:41:02\" }",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-servicenow-dynamic disabled: false",
"servicenow: # The base url of the ServiceNow instance. baseUrl: USD{SERVICENOW_BASE_URL} # The username to use for authentication. username: USD{SERVICENOW_USERNAME} # The password to use for authentication. password: USD{SERVICENOW_PASSWORD}",
"// Create the BackendFeature export const customRootHttpServerFactory: BackendFeature = rootHttpRouterServiceFactory({ configure: ({ app, routes, middleware, logger }) => { logger.info( 'Using custom root HttpRouterServiceFactory configure function', ); app.use(middleware.helmet()); app.use(middleware.cors()); app.use(middleware.compression()); app.use(middleware.logging()); // Add a the custom middleware function before all // of the route handlers app.use(addTestHeaderMiddleware({ logger })); app.use(routes); app.use(middleware.notFound()); app.use(middleware.error()); }, }); // Export the BackendFeature as the default entrypoint export default customRootHttpServerFactory;"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/configuring_dynamic_plugins/%7Blinkgettingstartedguide%7D
|
Chapter 4. Reviewing inventories with Automation content navigator
|
Chapter 4. Reviewing inventories with Automation content navigator As a content creator, you can review your Ansible inventory with Automation content navigator and interactively delve into the groups and hosts. 4.1. Reviewing inventory from Automation content navigator You can review Ansible inventories with the Automation content navigator text-based user interface in interactive mode and delve into groups and hosts for more details. Prerequisites A valid inventory file or an inventory plugin. Procedure Start Automation content navigator. USD ansible-navigator Optional: type ansible-navigator inventory -i simple_inventory.yml from the command line to view the inventory. Review the inventory. :inventory -i simple_inventory.yml TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members 1│Browse hosts Explore the inventory with a list of all hosts Type 0 to browse the groups. NAME TAXONOMY TYPE 0│general all group 1│nodes all group 2│ungrouped all group The TAXONOMY field details the hierarchy of groups the selected group or node belongs to. Type the number corresponding to the group you want to delve into. NAME TAXONOMY TYPE 0│node-0 all▸nodes host 1│node-1 all▸nodes host 2│node-2 all▸nodes host Type the number corresponding to the host you want to delve into, or type :<number> for numbers greater than 9. [node-1] 0│--- 1│ansible_host: node-1.example.com 2│inventory_hostname: node-1 Verification Review the inventory output. TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members 1│Browse hosts Explore the inventory with a list of all hosts Additional resources ansible-inventory . Introduction to Ansible inventories .
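For reference, the simple_inventory.yml file used in this procedure is not shown in full. A minimal YAML inventory that would produce the groups and hosts listed above might look like the following sketch; the ansible_host value for node-1 comes from the sample output, while the empty general group and the bare node-0 and node-2 entries are illustrative assumptions only.
all:
  children:
    general: {}   # members of this group are not shown in the sample output
    nodes:
      hosts:
        node-0:
        node-1:
          ansible_host: node-1.example.com
        node-2: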
|
[
"ansible-navigator",
":inventory -i simple_inventory.yml TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members 1│Browse hosts Explore the inventory with a list of all hosts",
"NAME TAXONOMY TYPE 0│general all group 1│nodes all group 2│ungrouped all group",
"NAME TAXONOMY TYPE 0│node-0 all▸nodes host 1│node-1 all▸nodes host 2│node-2 all▸nodes host",
"[node-1] 0│--- 1│ansible_host: node-1.example.com 2│inventory_hostname: node-1",
"TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members members 1│Browse hosts Explore the inventory with a list of all hosts"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/automation_content_navigator_creator_guide/assembly-review-inventory-navigator_ansible-navigator
|
1.2. Default Cgroup Hierarchies
|
1.2. Default Cgroup Hierarchies By default, systemd automatically creates a hierarchy of slice , scope and service units to provide a unified structure for the cgroup tree. With the systemctl command, you can further modify this structure by creating custom slices, as shown in Section 2.1, "Creating Control Groups" . Also, systemd automatically mounts hierarchies for important kernel resource controllers (see Available Controllers in Red Hat Enterprise Linux 7 ) in the /sys/fs/cgroup/ directory. Warning The deprecated cgconfig tool from the libcgroup package is available to mount and handle hierarchies for controllers not yet supported by systemd (most notably the net-prio controller). Never use libcgropup tools to modify the default hierarchies mounted by systemd since it would lead to unexpected behavior. The libcgroup library will be removed in future versions of Red Hat Enterprise Linux. For more information on how to use cgconfig , see Chapter 3, Using libcgroup Tools . Systemd Unit Types All processes running on the system are child processes of the systemd init process. Systemd provides three unit types that are used for the purpose of resource control (for a complete list of systemd 's unit types, see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrator's Guide ): Service - A process or a group of processes, which systemd started based on a unit configuration file. Services encapsulate the specified processes so that they can be started and stopped as one set. Services are named in the following way: name . service Where name stands for the name of the service. Scope - A group of externally created processes. Scopes encapsulate processes that are started and stopped by arbitrary processes through the fork() function and then registered by systemd at runtime. For instance, user sessions, containers, and virtual machines are treated as scopes. Scopes are named as follows: name . scope Here, name stands for the name of the scope. Slice - A group of hierarchically organized units. Slices do not contain processes, they organize a hierarchy in which scopes and services are placed. The actual processes are contained in scopes or in services. In this hierarchical tree, every name of a slice unit corresponds to the path to a location in the hierarchy. The dash (" - ") character acts as a separator of the path components. For example, if the name of a slice looks as follows: parent - name . slice it means that a slice called parent - name . slice is a subslice of the parent . slice . This slice can have its own subslice named parent - name - name2 . slice , and so on. There is one root slice denoted as: -.slice Service, scope, and slice units directly map to objects in the cgroup tree. When these units are activated, they map directly to cgroup paths built from the unit names. For example, the ex.service residing in the test-waldo.slice is mapped to the cgroup test.slice/test-waldo.slice/ex.service/ . Services, scopes, and slices are created manually by the system administrator or dynamically by programs. By default, the operating system defines a number of built-in services that are necessary to run the system. Also, there are four slices created by default: -.slice - the root slice; system.slice - the default place for all system services; user.slice - the default place for all user sessions; machine.slice - the default place for all virtual machines and Linux containers. 
Note that all user sessions are automatically placed in a separated scope unit, as well as virtual machines and container processes. Furthermore, all users are assigned with an implicit subslice. Besides the above default configuration, the system administrator can define new slices and assign services and scopes to them. The following tree is a simplified example of a cgroup tree. This output was generated with the systemd-cgls command described in Section 2.4, "Obtaining Information about Control Groups" : As you can see, services and scopes contain processes and are placed in slices that do not contain processes of their own. The only exception is PID 1 that is located in the special systemd slice marked as -.slice . Also note that -.slice is not shown as it is implicitly identified with the root of the entire tree. Service and slice units can be configured with persistent unit files as described in Section 2.3.2, "Modifying Unit Files" , or created dynamically at runtime by API calls to PID 1 (see the section called "Online Documentation" for API reference). Scope units can be created only dynamically. Units created dynamically with API calls are transient and exist only during runtime. Transient units are released automatically as soon as they finish, get deactivated, or the system is rebooted.
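To see this name-to-path mapping in practice, you can start a transient unit in a custom slice and inspect where systemd places it. The following commands are an illustrative sketch only: ex and test-waldo.slice are the example names used in the text above, and sleep 600 is an arbitrary placeholder workload.
systemd-run --unit=ex --slice=test-waldo.slice sleep 600   # creates test.slice/test-waldo.slice/ex.service in the cgroup tree
systemctl status ex.service                                # the CGroup: line shows the resulting path
systemd-cgls                                               # browse the full hierarchy, including the new slice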
|
[
"├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20 ├─user.slice │ └─user-1000.slice │ └─session-1.scope │ ├─11459 gdm-session-worker [pam/gdm-password] │ ├─11471 gnome-session --session gnome-classic │ ├─11479 dbus-launch --sh-syntax --exit-with-session │ ├─11480 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session │ │ └─system.slice ├─systemd-journald.service │ └─422 /usr/lib/systemd/systemd-journald ├─bluetooth.service │ └─11691 /usr/sbin/bluetoothd -n ├─systemd-localed.service │ └─5328 /usr/lib/systemd/systemd-localed ├─colord.service │ └─5001 /usr/libexec/colord ├─sshd.service │ └─1191 /usr/sbin/sshd -D │"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-default_cgroup_hierarchies
|
Part I. Release notes for Red Hat Build of OptaPlanner 8.38
|
Part I. Release notes for Red Hat Build of OptaPlanner 8.38 These release notes list new features and provide upgrade instructions for Red Hat Build of OptaPlanner 8.38.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/assembly-release-notes
|
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service
|
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service Red Hat Developer Hub 1.2 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/index
|
Chapter 1. The OpenStack Client
|
Chapter 1. The OpenStack Client The openstack client is a common OpenStack command-line interface (CLI). This chapter documents the main options for openstack version 6.2.1 . Command-line interface to the OpenStack APIs Usage: Table 1.1. Command arguments Value Summary --version Show program's version number and exit -v, --verbose Increase verbosity of output. can be repeated. -q, --quiet Suppress output except warnings and errors. --log-file LOG_FILE Specify a file to log output. disabled by default. -h, --help Show help message and exit. --debug Show tracebacks on errors. --os-cloud <cloud-config-name> Cloud name in clouds.yaml (env: os_cloud) --os-region-name <auth-region-name> Authentication region name (env: os_region_name) --os-cacert <ca-bundle-file> Ca certificate bundle file (env: os_cacert) --os-cert <certificate-file> Client certificate bundle file (env: os_cert) --os-key <key-file> Client certificate key file (env: os_key) --verify Verify server certificate (default) --insecure Disable server certificate verification --os-default-domain <auth-domain> Default domain id, default=default. (env: OS_DEFAULT_DOMAIN) --os-interface <interface> Select an interface type. valid interface types: [admin, public, internal]. default=public, (Env: OS_INTERFACE) --os-service-provider <service_provider> Authenticate with and perform the command on a service provider using Keystone-to-keystone federation. Must also specify the remote project option. --os-remote-project-name <remote_project_name> Project name when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-id <remote_project_id> Project id when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-name <remote_project_domain_name> Domain name of the project when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-id <remote_project_domain_id> Domain id of the project when authenticating to a service provider if using Keystone-to-Keystone federation. 
--timing Print api call timing info --os-beta-command Enable beta commands which are subject to change --os-profile hmac-key Hmac key for encrypting profiling context data --os-compute-api-version <compute-api-version> Compute api version, default=2.1 (env: OS_COMPUTE_API_VERSION) --os-identity-api-version <identity-api-version> Identity api version, default=3 (env: OS_IDENTITY_API_VERSION) --os-image-api-version <image-api-version> Image api version, default=2 (env: OS_IMAGE_API_VERSION) --os-network-api-version <network-api-version> Network api version, default=2.0 (env: OS_NETWORK_API_VERSION) --os-object-api-version <object-api-version> Object api version, default=1 (env: OS_OBJECT_API_VERSION) --os-volume-api-version <volume-api-version> Volume api version, default=3 (env: OS_VOLUME_API_VERSION) --os-alarming-api-version <alarming-api-version> Queues api version, default=2 (env: OS_ALARMING_API_VERSION) --os-metrics-api-version <metrics-api-version> Metrics api version, default=1 (env: OS_METRICS_API_VERSION) --os-key-manager-api-version <key-manager-api-version> Barbican api version, default=1 (env: OS_KEY_MANAGER_API_VERSION) --os-dns-api-version <dns-api-version> Dns api version, default=2 (env: os_dns_api_version) --os-orchestration-api-version <orchestration-api-version> Orchestration api version, default=1 (env: OS_ORCHESTRATION_API_VERSION) --os-baremetal-api-version <baremetal-api-version> Bare metal api version, default="latest" (the maximum version supported by both the client and the server). (Env: OS_BAREMETAL_API_VERSION) --os-share-api-version <shared-file-system-api-version> Shared file system api version, default=2.75version supported by both the client and the server). (Env: OS_SHARE_API_VERSION) --os-loadbalancer-api-version <loadbalancer-api-version> Osc plugin api version, default=2.0 (env: OS_LOADBALANCER_API_VERSION) --os-queues-api-version <queues-api-version> Queues api version, default=2 (env: OS_QUEUES_API_VERSION) --os-auth-type <auth-type> Select an authentication type. available types: v3oidcclientcredentials, v3password, v3adfspassword, v3samlpassword, aodh-noauth, v3oauth1, v3token, v2password, v3tokenlessauth, gnocchi-noauth, v3oidcauthcode, v3applicationcredential, password, v3oidcaccesstoken, v3multifactor, v3oauth2clientcredential, v3totp, gnocchi-basic, v1password, v3oidcpassword, none, v2token, admin_token, token, http_basic, noauth. 
Default: selected based on --os-username/--os-token (Env: OS_AUTH_TYPE) --os-auth-url <auth-auth-url> With v3oidcclientcredentials: authentication url with v3password: Authentication URL With v3adfspassword: Authentication URL With v3samlpassword: Authentication URL With v3oauth1: Authentication URL With v3token: Authentication URL With v2password: Authentication URL With v3tokenlessauth: Authentication URL With v3oidcauthcode: Authentication URL With v3applicationcredential: Authentication URL With password: Authentication URL With v3oidcaccesstoken: Authentication URL With v3multifactor: Authentication URL With v3oauth2clientcredential: Authentication URL With v3totp: Authentication URL With v1password: Authentication URL With v3oidcpassword: Authentication URL With v2token: Authentication URL With token: Authentication URL (Env: OS_AUTH_URL) --os-system-scope <auth-system-scope> With v3oidcclientcredentials: scope for system operations With v3password: Scope for system operations With v3adfspassword: Scope for system operations With v3samlpassword: Scope for system operations With v3token: Scope for system operations With v3oidcauthcode: Scope for system operations With v3applicationcredential: Scope for system operations With password: Scope for system operations With v3oidcaccesstoken: Scope for system operations With v3multifactor: Scope for system operations With v3oauth2clientcredential: Scope for system operations With v3totp: Scope for system operations With v3oidcpassword: Scope for system operations With token: Scope for system operations (Env: OS_SYSTEM_SCOPE) --os-domain-id <auth-domain-id> With v3oidcclientcredentials: domain id to scope to With v3password: Domain ID to scope to With v3adfspassword: Domain ID to scope to With v3samlpassword: Domain ID to scope to With v3token: Domain ID to scope to With v3tokenlessauth: Domain ID to scope to With v3oidcauthcode: Domain ID to scope to With v3applicationcredential: Domain ID to scope to With password: Domain ID to scope to With v3oidcaccesstoken: Domain ID to scope to With v3multifactor: Domain ID to scope to With v3oauth2clientcredential: Domain ID to scope to With v3totp: Domain ID to scope to With v3oidcpassword: Domain ID to scope to With token: Domain ID to scope to (Env: OS_DOMAIN_ID) --os-domain-name <auth-domain-name> With v3oidcclientcredentials: domain name to scope to With v3password: Domain name to scope to With v3adfspassword: Domain name to scope to With v3samlpassword: Domain name to scope to With v3token: Domain name to scope to With v3tokenlessauth: Domain name to scope to With v3oidcauthcode: Domain name to scope to With v3applicationcredential: Domain name to scope to With password: Domain name to scope to With v3oidcaccesstoken: Domain name to scope to With v3multifactor: Domain name to scope to With v3oauth2clientcredential: Domain name to scope to With v3totp: Domain name to scope to With v3oidcpassword: Domain name to scope to With token: Domain name to scope to (Env: OS_DOMAIN_NAME) --os-project-id <auth-project-id> With v3oidcclientcredentials: project id to scope to With v3password: Project ID to scope to With v3adfspassword: Project ID to scope to With v3samlpassword: Project ID to scope to With aodh- noauth: Project ID With v3token: Project ID to scope to With v3tokenlessauth: Project ID to scope to With gnocchi-noauth: Project ID With v3oidcauthcode: Project ID to scope to With v3applicationcredential: Project ID to scope to With password: Project ID to scope to With v3oidcaccesstoken: Project ID 
to scope to With v3multifactor: Project ID to scope to With v3oauth2clientcredential: Project ID to scope to With v3totp: Project ID to scope to With v3oidcpassword: Project ID to scope to With token: Project ID to scope to With noauth: Project ID (Env: OS_PROJECT_ID) --os-project-name <auth-project-name> With v3oidcclientcredentials: project name to scope to With v3password: Project name to scope to With v3adfspassword: Project name to scope to With v3samlpassword: Project name to scope to With v3token: Project name to scope to With v3tokenlessauth: Project name to scope to With v3oidcauthcode: Project name to scope to With v3applicationcredential: Project name to scope to With password: Project name to scope to With v3oidcaccesstoken: Project name to scope to With v3multifactor: Project name to scope to With v3oauth2clientcredential: Project name to scope to With v3totp: Project name to scope to With v1password: Swift account to use With v3oidcpassword: Project name to scope to With token: Project name to scope to (Env: OS_PROJECT_NAME) --os-project-domain-id <auth-project-domain-id> With v3oidcclientcredentials: domain id containing project With v3password: Domain ID containing project With v3adfspassword: Domain ID containing project With v3samlpassword: Domain ID containing project With v3token: Domain ID containing project With v3tokenlessauth: Domain ID containing project With v3oidcauthcode: Domain ID containing project With v3applicationcredential: Domain ID containing project With password: Domain ID containing project With v3oidcaccesstoken: Domain ID containing project With v3multifactor: Domain ID containing project With v3oauth2clientcredential: Domain ID containing project With v3totp: Domain ID containing project With v3oidcpassword: Domain ID containing project With token: Domain ID containing project (Env: OS_PROJECT_DOMAIN_ID) --os-project-domain-name <auth-project-domain-name> With v3oidcclientcredentials: domain name containing project With v3password: Domain name containing project With v3adfspassword: Domain name containing project With v3samlpassword: Domain name containing project With v3token: Domain name containing project With v3tokenlessauth: Domain name containing project With v3oidcauthcode: Domain name containing project With v3applicationcredential: Domain name containing project With password: Domain name containing project With v3oidcaccesstoken: Domain name containing project With v3multifactor: Domain name containing project With v3oauth2clientcredential: Domain name containing project With v3totp: Domain name containing project With v3oidcpassword: Domain name containing project With token: Domain name containing project (Env: OS_PROJECT_DOMAIN_NAME) --os-trust-id <auth-trust-id> With v3oidcclientcredentials: id of the trust to use as a trustee use With v3password: ID of the trust to use as a trustee use With v3adfspassword: ID of the trust to use as a trustee use With v3samlpassword: ID of the trust to use as a trustee use With v3token: ID of the trust to use as a trustee use With v2password: ID of the trust to use as a trustee use With v3oidcauthcode: ID of the trust to use as a trustee use With v3applicationcredential: ID of the trust to use as a trustee use With password: ID of the trust to use as a trustee use With v3oidcaccesstoken: ID of the trust to use as a trustee use With v3multifactor: ID of the trust to use as a trustee use With v3oauth2clientcredential: ID of the trust to use as a trustee use With v3totp: ID of the trust to use as a 
trustee use With v3oidcpassword: ID of the trust to use as a trustee use With v2token: ID of the trust to use as a trustee use With token: ID of the trust to use as a trustee use (Env: OS_TRUST_ID) --os-identity-provider <auth-identity-provider> With v3oidcclientcredentials: identity provider's name With v3adfspassword: Identity Provider's name With v3samlpassword: Identity Provider's name With v3oidcauthcode: Identity Provider's name With v3oidcaccesstoken: Identity Provider's name With v3oidcpassword: Identity Provider's name (Env: OS_IDENTITY_PROVIDER) --os-protocol <auth-protocol> With v3oidcclientcredentials: protocol for federated plugin With v3adfspassword: Protocol for federated plugin With v3samlpassword: Protocol for federated plugin With v3oidcauthcode: Protocol for federated plugin With v3oidcaccesstoken: Protocol for federated plugin With v3oidcpassword: Protocol for federated plugin (Env: OS_PROTOCOL) --os-client-id <auth-client-id> With v3oidcclientcredentials: oauth 2.0 client id with v3oidcauthcode: OAuth 2.0 Client ID With v3oidcpassword: OAuth 2.0 Client ID (Env: OS_CLIENT_ID) --os-client-secret <auth-client-secret> With v3oidcclientcredentials: oauth 2.0 client secret With v3oidcauthcode: OAuth 2.0 Client Secret With v3oidcpassword: OAuth 2.0 Client Secret (Env: OS_CLIENT_SECRET) --os-openid-scope <auth-openid-scope> With v3oidcclientcredentials: openid connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcauthcode: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcpassword: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. (Env: OS_OPENID_SCOPE) --os-access-token-endpoint <auth-access-token-endpoint> With v3oidcclientcredentials: openid connect provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcauthcode: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcpassword: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. (Env: OS_ACCESS_TOKEN_ENDPOINT) --os-discovery-endpoint <auth-discovery-endpoint> With v3oidcclientcredentials: openid connect discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcauthcode: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcpassword: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. 
This URL should look like https://idp.example.org/.well-known/openid- configuration (Env: OS_DISCOVERY_ENDPOINT) --os-access-token-type <auth-access-token-type> With v3oidcclientcredentials: oauth 2.0 authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcauthcode: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcpassword: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" (Env: OS_ACCESS_TOKEN_TYPE) --os-user-id <auth-user-id> With v3password: user id with aodh-noauth: user id With v2password: User ID to login with With gnocchi- noauth: User ID With v3applicationcredential: User ID With password: User id With v3totp: User ID With noauth: User ID (Env: OS_USER_ID) --os-username <auth-username> With v3password: username with v3adfspassword: Username With v3samlpassword: Username With v2password: Username to login with With v3applicationcredential: Username With password: Username With v3totp: Username With v1password: Username to login with With v3oidcpassword: Username With http_basic: Username (Env: OS_USERNAME) --os-user-domain-id <auth-user-domain-id> With v3password: user's domain id with v3applicationcredential: User's domain id With password: User's domain id With v3totp: User's domain id (Env: OS_USER_DOMAIN_ID) --os-user-domain-name <auth-user-domain-name> With v3password: user's domain name with v3applicationcredential: User's domain name With password: User's domain name With v3totp: User's domain name (Env: OS_USER_DOMAIN_NAME) --os-password <auth-password> With v3password: user's password with v3adfspassword: Password With v3samlpassword: Password With v2password: Password to use With password: User's password With v1password: Password to use With v3oidcpassword: Password With http_basic: User's password (Env: OS_PASSWORD) --os-identity-provider-url <auth-identity-provider-url> With v3adfspassword: an identity provider url, where the SAML authentication request will be sent. With v3samlpassword: An Identity Provider URL, where the SAML2 authentication request will be sent. 
(Env: OS_IDENTITY_PROVIDER_URL) --os-service-provider-endpoint <auth-service-provider-endpoint> With v3adfspassword: service provider's endpoint (env: OS_SERVICE_PROVIDER_ENDPOINT) --os-service-provider-entity-id <auth-service-provider-entity-id> With v3adfspassword: service provider's saml entity id (Env: OS_SERVICE_PROVIDER_ENTITY_ID) --os-roles <auth-roles> With aodh-noauth: roles with gnocchi-noauth: roles (Env: OS_ROLES) --os-aodh-endpoint <auth-aodh-endpoint> With aodh-noauth: aodh endpoint (env: OS_AODH_ENDPOINT) --os-consumer-key <auth-consumer-key> With v3oauth1: oauth consumer id/key (env: OS_CONSUMER_KEY) --os-consumer-secret <auth-consumer-secret> With v3oauth1: oauth consumer secret (env: OS_CONSUMER_SECRET) --os-access-key <auth-access-key> With v3oauth1: oauth access key (env: os_access_key) --os-access-secret <auth-access-secret> With v3oauth1: oauth access secret (env: OS_ACCESS_SECRET) --os-token <auth-token> With v3token: token to authenticate with with v2token: Token With admin_token: The token that will always be used With token: Token to authenticate with (Env: OS_TOKEN) --os-endpoint <auth-endpoint> With gnocchi-noauth: gnocchi endpoint with gnocchi- basic: Gnocchi endpoint With none: The endpoint that will always be used With admin_token: The endpoint that will always be used With http_basic: The endpoint that will always be used With noauth: Cinder endpoint (Env: OS_ENDPOINT) --os-redirect-uri <auth-redirect-uri> With v3oidcauthcode: openid connect redirect url (env: OS_REDIRECT_URI) --os-code <auth-code> With v3oidcauthcode: oauth 2.0 authorization code (Env: OS_CODE) --os-application-credential-secret <auth-application-credential-secret> With v3applicationcredential: application credential auth secret (Env: OS_APPLICATION_CREDENTIAL_SECRET) --os-application-credential-id <auth-application-credential-id> With v3applicationcredential: application credential ID (Env: OS_APPLICATION_CREDENTIAL_ID) --os-application-credential-name <auth-application-credential-name> With v3applicationcredential: application credential name (Env: OS_APPLICATION_CREDENTIAL_NAME) --os-default-domain-id <auth-default-domain-id> With password: optional domain id to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With token: Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_ID) --os-default-domain-name <auth-default-domain-name> With password: optional domain name to use with v3 api and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With token: Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_NAME) --os-access-token <auth-access-token> With v3oidcaccesstoken: oauth 2.0 access token (env: OS_ACCESS_TOKEN) --os-auth-methods <auth-auth-methods> With v3multifactor: methods to authenticate with. 
(Env: OS_AUTH_METHODS) --os-oauth2-endpoint <auth-oauth2-endpoint> With v3oauth2clientcredential: endpoint for oauth2.0 (Env: OS_OAUTH2_ENDPOINT) --os-oauth2-client-id <auth-oauth2-client-id> With v3oauth2clientcredential: client id for oauth2.0 (Env: OS_OAUTH2_CLIENT_ID) --os-oauth2-client-secret <auth-oauth2-client-secret> With v3oauth2clientcredential: client secret for OAuth2.0 (Env: OS_OAUTH2_CLIENT_SECRET) --os-passcode <auth-passcode> With v3totp: user's totp passcode (env: os_passcode) --os-user <auth-user> With gnocchi-basic: user (env: os_user)
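As a brief illustration of how these global options and environment variables fit together, the following sketch authenticates with the password plugin and runs one example subcommand. All values shown are placeholders for your own deployment, and server list is just one of many available subcommands.
export OS_AUTH_URL=https://keystone.example.com:5000/v3   # placeholder endpoint
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
openstack server list
# Alternatively, select a named cloud from clouds.yaml instead of exporting variables:
openstack --os-cloud <cloud-config-name> server list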
|
[
"openstack [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug] [--os-cloud <cloud-config-name>] [--os-region-name <auth-region-name>] [--os-cacert <ca-bundle-file>] [--os-cert <certificate-file>] [--os-key <key-file>] [--verify | --insecure] [--os-default-domain <auth-domain>] [--os-interface <interface>] [--os-service-provider <service_provider>] [--os-remote-project-name <remote_project_name> | --os-remote-project-id <remote_project_id>] [--os-remote-project-domain-name <remote_project_domain_name> | --os-remote-project-domain-id <remote_project_domain_id>] [--timing] [--os-beta-command] [--os-profile hmac-key] [--os-compute-api-version <compute-api-version>] [--os-identity-api-version <identity-api-version>] [--os-image-api-version <image-api-version>] [--os-network-api-version <network-api-version>] [--os-object-api-version <object-api-version>] [--os-volume-api-version <volume-api-version>] [--os-alarming-api-version <alarming-api-version>] [--os-metrics-api-version <metrics-api-version>] [--os-key-manager-api-version <key-manager-api-version>] [--os-dns-api-version <dns-api-version>] [--os-orchestration-api-version <orchestration-api-version>] [--os-baremetal-api-version <baremetal-api-version>] [--os-share-api-version <shared-file-system-api-version>] [--os-loadbalancer-api-version <loadbalancer-api-version>] [--os-queues-api-version <queues-api-version>] [--os-auth-type <auth-type>] [--os-auth-url <auth-auth-url>] [--os-system-scope <auth-system-scope>] [--os-domain-id <auth-domain-id>] [--os-domain-name <auth-domain-name>] [--os-project-id <auth-project-id>] [--os-project-name <auth-project-name>] [--os-project-domain-id <auth-project-domain-id>] [--os-project-domain-name <auth-project-domain-name>] [--os-trust-id <auth-trust-id>] [--os-identity-provider <auth-identity-provider>] [--os-protocol <auth-protocol>] [--os-client-id <auth-client-id>] [--os-client-secret <auth-client-secret>] [--os-openid-scope <auth-openid-scope>] [--os-access-token-endpoint <auth-access-token-endpoint>] [--os-discovery-endpoint <auth-discovery-endpoint>] [--os-access-token-type <auth-access-token-type>] [--os-user-id <auth-user-id>] [--os-username <auth-username>] [--os-user-domain-id <auth-user-domain-id>] [--os-user-domain-name <auth-user-domain-name>] [--os-password <auth-password>] [--os-identity-provider-url <auth-identity-provider-url>] [--os-service-provider-endpoint <auth-service-provider-endpoint>] [--os-service-provider-entity-id <auth-service-provider-entity-id>] [--os-roles <auth-roles>] [--os-aodh-endpoint <auth-aodh-endpoint>] [--os-consumer-key <auth-consumer-key>] [--os-consumer-secret <auth-consumer-secret>] [--os-access-key <auth-access-key>] [--os-access-secret <auth-access-secret>] [--os-token <auth-token>] [--os-endpoint <auth-endpoint>] [--os-redirect-uri <auth-redirect-uri>] [--os-code <auth-code>] [--os-application-credential-secret <auth-application-credential-secret>] [--os-application-credential-id <auth-application-credential-id>] [--os-application-credential-name <auth-application-credential-name>] [--os-default-domain-id <auth-default-domain-id>] [--os-default-domain-name <auth-default-domain-name>] [--os-access-token <auth-access-token>] [--os-auth-methods <auth-auth-methods>] [--os-oauth2-endpoint <auth-oauth2-endpoint>] [--os-oauth2-client-id <auth-oauth2-client-id>] [--os-oauth2-client-secret <auth-oauth2-client-secret>] [--os-passcode <auth-passcode>] [--os-user <auth-user>]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/the_openstack_client
|
Chapter 2. Red Hat Quay support
|
Chapter 2. Red Hat Quay support Red Hat Quay provides support for the following: Multiple authentication and access methods Multiple storage backends Custom certificates for Quay , Clair , and storage backend containers Application registries Different container image types 2.1. Architecture Red Hat Quay includes several core components, both internal and external. For a fuller architectural breakdown, see the Red Hat Quay architecture guide. 2.1.1. Internal components Red Hat Quay includes the following internal components: Quay (container registry) . Runs the Quay container as a service, consisting of several components in the pod. Clair . Scans container images for vulnerabilities and suggests fixes. 2.1.2. External components Red Hat Quay includes the following external components: Database . Used by Red Hat Quay as its primary metadata storage. Note that this is not for image storage. Redis (key-value store) . Stores live builder logs and the Red Hat Quay tutorial. Also includes the locking mechanism that is required for garbage collection. Cloud storage . For supported deployments, one of the following storage types must be used: Public cloud storage . In public cloud environments, you should use the cloud provider's object storage, such as Amazon Web Services's Amazon S3 or Google Cloud's Google Cloud Storage. Private cloud storage . In private clouds, an S3 or Swift compliant Object Store is needed, such as Ceph RADOS, or OpenStack Swift. Warning Do not use "Locally mounted directory" Storage Engine for any production configurations. Mounted NFS volumes are not supported. Local storage is meant for Red Hat Quay test-only installations.
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploy_red_hat_quay_-_high_availability/poc-support
|
Chapter 2. Streams for Apache Kafka 2.9 Long Term Support
|
Chapter 2. Streams for Apache Kafka 2.9 Long Term Support Streams for Apache Kafka 2.9 is a Long Term Support (LTS) offering for Streams for Apache Kafka. For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy .
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/ref-lts-str
|
Providing feedback on Red Hat JBoss Core Services documentation
|
Providing feedback on Red Hat JBoss Core Services documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_3_release_notes/providing-direct-documentation-feedback_2.4.57-release-notes
|
Chapter 4. Known issues
|
Chapter 4. Known issues Resolved known issues for this release of Red Hat Trusted Artifact Signer (RHTAS): Version number reported incorrectly on OpenShift 4.13 A list of known issues found in this release of RHTAS: The ownerReferences are lost when restoring Trusted Artifact Signer to a different OpenShift cluster When restoring the RHTAS data to a new Red Hat OpenShift cluster, the ownerReferences for components are lost. This happens because the Securesign UUID changes when restoring on a new cluster, and the ownerReferences for each component get deleted since they are no longer valid. To work around this issue, run the provided script after the Securesign resource is restored. This script recreates the ownerReferences with the new Securesign UUID. Specifying a PVC name for the TUF repository fails the initialization process Specifying a persistent volume claim (PVC) name in The Update Framework (TUF) resource causes the RHTAS operator to fail the initialization of the TUF repository. For example: To work around this issue, do not specify a PVC name in the TUF resource. This allows the RHTAS operator to automatically create the PVC, name it tuf, and properly initialize the TUF repository. Rekor Search UI does not show records after upgrade After upgrading the RHTAS operator to the latest version (1.0.1), the existing Rekor data is not found when searching by email address. The backfill-redis CronJob, which ensures that the Rekor Search UI can query the transparency log, runs only once per day, at midnight. To work around this issue, you can trigger the backfill-redis job manually, instead of waiting until midnight. To trigger the backfill-redis job from the command-line interface, run the following command: Doing this adds the missing data back to the Rekor Search UI. The Trusted Artifact Signer operator does not apply configuration changes We found a potential issue with the RHTAS operator logic that can cause an unexpected state when redeploying. This inconsistent state can happen if you remove configurations from RHTAS resources and the operator tries to redeploy those resources. To work around this potential issue, you can delete the specific resource, and then re-create that resource by using the instance's configuration, such as keys and persistent volumes. The RHTAS resources are: Securesign, Fulcio, The Update Framework (TUF), Rekor, Certificate Transparency (CT) log, or Trillian. For example, to delete the Securesign resource: USD oc delete Securesign securesign-sample For example, to re-create the Securesign resource from a configuration file: USD oc create -f ./securesign-sample.yaml Operator does not update the component status after doing a restore to a different OpenShift cluster When restoring the RHTAS signer data from a backup to a new OpenShift cluster, the component status links do not update as expected. Currently, you have to manually delete the securesign-sample-trillian-db-tls resource, and manually update the component status links. The RHTAS operator will automatically recreate an updated securesign-sample-trillian-db-tls resource, after it has been removed.
After the backup procedure starts and the secrets are restored, delete the securesign-sample-trillian-db-tls resource: Example Once all the pods start, update the status for Securesign and TimestampAuthority: Example Trusted Artifact Signer requires cosign 2.2 or later Because of recent changes to how we generate The Update Framework (TUF) repository, and the use of different checksum algorithms, we require the use of cosign version 2.2 or later. With this release of RHTAS, you can download cosign version 2.4 for use with Trusted Artifact Signer.
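For reference, a TUF resource that avoids the PVC known issue described above simply omits the spec.tuf.pvc.name field, as in the following minimal sketch; whether other pvc settings can be kept is not covered by the note above, so only the minimal form is shown.
spec:
  tuf: {}   # no pvc.name set; the operator creates a PVC named tuf and initializes the repository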
|
[
"spec: tuf: pvc: name: tuf-pvc-example-name",
"create job --from=cronjob/backfill-redis backfill-redis -n trusted-artifact-signer",
"oc delete Securesign securesing-sample",
"oc create -f ./securesign-sample.yaml",
"oc delete secret securesign-sample-trillian-db-tls",
"oc edit --subresource=status Securesign securesign-sample oc edit --subresource=status TimestampAuthority securesign-sample"
] |
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/release_notes/known-issues
|
Chapter 24. Using cgroups-v2 to control distribution of CPU time for applications
|
Chapter 24. Using cgroups-v2 to control distribution of CPU time for applications Some applications use too much CPU time, which can negatively impact the overall health of your environment. You can put your applications into control groups version 2 ( cgroups-v2 ) and configure CPU limits for those control groups. As a result, you can regulate your applications in CPU consumption. The user has two methods how to regulate distribution of CPU time allocated to a control group: Setting CPU bandwidth (editing the cpu.max controller file) Setting CPU weight (editing the cpu.weight controller file) 24.1. Mounting cgroups-v2 During the boot process, RHEL 8 mounts the cgroup-v1 virtual filesystem by default. To utilize cgroup-v2 functionality in limiting resources for your applications, manually configure the system. Prerequisites You have root permissions. Procedure Configure the system to mount cgroups-v2 by default during system boot by the systemd system and service manager: This adds the necessary kernel command-line parameter to the current boot entry. To add the systemd.unified_cgroup_hierarchy=1 parameter to all kernel boot entries: Reboot the system for the changes to take effect. Verification Verify the cgroups-v2 filesystem is mounted: The cgroups-v2 filesystem was successfully mounted on the /sys/fs/cgroup/ directory. Inspect the contents of the /sys/fs/cgroup/ directory: The /sys/fs/cgroup/ directory, also called the root control group , by default, provides interface files (starting with cgroup ) and controller-specific files such as cpuset.cpus.effective . In addition, there are some directories related to systemd , such as, /sys/fs/cgroup/init.scope , /sys/fs/cgroup/system.slice , and /sys/fs/cgroup/user.slice . Additional resources cgroups(7) , sysfs(5) manual pages 24.2. Preparing the cgroup for distribution of CPU time To control CPU consumption of your applications, you need to enable specific CPU controllers and create a dedicated control groups. It is recommended to create at least two levels of child control groups inside the /sys/fs/cgroup/ root control group to keep organizational clarity of cgroup files. Prerequisites You have root permissions. You have identified PIDs of processes that you want to control. You have mounted the cgroups-v2 file system. For more information, see Mounting cgroups-v2 . Procedure Identify the process IDs (PIDs) of applications whose CPU consumption you want to constrict: The example output reveals that PID 34578 and 34579 (two illustrative applications of sha1sum ) consume a huge amount of resources, namely CPU. Both are the example applications used to demonstrate managing the cgroups-v2 functionality. Verify that the cpu and cpuset controllers are available in the /sys/fs/cgroup/cgroup.controllers file: Enable CPU-related controllers: These commands enable the cpu and cpuset controllers for the immediate children groups of the /sys/fs/cgroup/ root control group. A child group is where you can specify processes and apply control checks to each of the processes based on your criteria. You can review the cgroup.subtree_control file at any level to identify the controllers that can be enabled in the immediate child group. Note By default, the /sys/fs/cgroup/cgroup.subtree_control file in the root control group contains memory and pids controllers. Create the /sys/fs/cgroup/Example/ directory: The /sys/fs/cgroup/Example/ directory defines a child group. Also, the step enabled the cpu and cpuset controllers for this child group. 
When you create the /sys/fs/cgroup/Example/ directory, some cgroups-v2 interface files and cpu and cpuset controller-specific files are automatically created in the directory. The /sys/fs/cgroup/Example/ directory also provides controller-specific files for the memory and pids controllers. Optional: Inspect the newly created child control group: The example output shows files such as cpuset.cpus and cpu.max . These files are specific to the cpuset and cpu controllers. The cpuset and cpu controllers are manually enabled for the root's ( /sys/fs/cgroup/ ) direct child control groups using the /sys/fs/cgroup/cgroup.subtree_control file. The directory also includes general cgroup control interface files such as cgroup.procs or cgroup.controllers , which are common to all control groups, regardless of enabled controllers. The files such as memory.high and pids.max relate to the memory and pids controllers, which are in the root control group ( /sys/fs/cgroup/ ), and are always enabled by default. By default, the newly created child group inherits access to all of the system's CPU and memory resources, without any limits. Enable the CPU-related controllers in /sys/fs/cgroup/Example/ to obtain controllers that are relevant only to CPU: These commands ensure that the immediate child control group will only have controllers relevant to regulate the CPU time distribution - not to memory or pids controllers. Create the /sys/fs/cgroup/Example/tasks/ directory: The /sys/fs/cgroup/Example/tasks/ directory defines a child group with files that relate purely to cpu and cpuset controllers. Optional: Inspect another child control group: Ensure the processes that you want to control for CPU time compete on the same CPU: This ensures the processes you will place in the Example/tasks child control group, compete on the same CPU. This setting is important for the cpu controller to activate. Important The cpu controller is only activated if the relevant child control group has at least 2 processes to compete for time on a single CPU. Verification Optional: Ensure the CPU-related controllers are enabled for the immediate children cgroups: Optional: Ensure the processes that you want to control for CPU time compete on the same CPU: Additional resources Introducing control groups Introducing kernel resource controllers Mounting cgroups-v2 cgroups(7) , sysfs(5) manual pages 24.3. Controlling distribution of CPU time for applications by adjusting CPU bandwidth You need to assign values to the relevant files of the cpu controller to regulate distribution of the CPU time to applications under the specific cgroup tree. Prerequisites You have root permissions. You have at least two applications for which you want to control distribution of CPU time. You ensured the relevant applications compete for CPU time on the same CPU as described in Preparing the cgroup for distribution of CPU time . You mounted cgroups-v2 filesystem as described in Mounting cgroups-v2 . You enabled cpu and cpuset controllers both in the parent control group and in child control group similarly as described in Preparing the cgroup for distribution of CPU time . You created two levels of child control groups inside the /sys/fs/cgroup/ root control group as in the example below: Procedure Configure CPU bandwidth to achieve resource restrictions within the control group: The first value is the allowed time quota in microseconds for which all processes collectively in a child group can run during one period. 
The second value specifies the length of the period. During a single period, when processes in a control group collectively exhaust the time specified by this quota, they are throttled for the remainder of the period and not allowed to run until the period. This command sets CPU time distribution controls so that all processes collectively in the /sys/fs/cgroup/Example/tasks child group can run on the CPU for only 0.2 seconds of every 1 second. That is, one fifth of each second. Optional: Verify the time quotas: Add the applications' PIDs to the Example/tasks child group: The example commands ensure that required applications become members of the Example/tasks child group and do not exceed the CPU time distribution configured for this child group. Verification Verify that the applications run in the specified control group: The output above shows the processes of the specified applications that run in the Example/tasks child group. Inspect the current CPU consumption of the throttled applications: Notice that the CPU consumption for the PID 34578 and PID 34579 has decreased to 10%. The Example/tasks child group regulates its processes to 20% of the CPU time collectively. Since there are 2 processes in the control group, each can utilize 10% of the CPU time. 24.4. Controlling distribution of CPU time for applications by adjusting CPU weight You need to assign values to the relevant files of the cpu controller to regulate distribution of the CPU time to applications under the specific cgroup tree. Prerequisites You have root permissions. You have applications for which you want to control distribution of CPU time. You ensured the relevant applications compete for CPU time on the same CPU as described in Preparing the cgroup for distribution of CPU time . You mounted cgroups-v2 filesystem as described in Mounting cgroups-v2 . You created a two level hierarchy of child control groups inside the /sys/fs/cgroup/ root control group as in the following example: You enabled cpu and cpuset controllers in the parent control group and in child control groups similarly as described in Preparing the cgroup for distribution of CPU time . Procedure Configure desired CPU weights to achieve resource restrictions within the control groups: Add the applications' PIDs to the g1 , g2 , and g3 child groups: The example commands ensure that desired applications become members of the Example/g*/ child cgroups and will get their CPU time distributed as per the configuration of those cgroups. The weights of the children cgroups ( g1 , g2 , g3 ) that have running processes are summed up at the level of the parent cgroup ( Example ). The CPU resource is then distributed proportionally based on the respective weights. As a result, when all processes run at the same time, the kernel allocates to each of them the proportionate CPU time based on their respective cgroup's cpu.weight file: Child cgroup cpu.weight file CPU time allocation g1 150 ~50% (150/300) g2 100 ~33% (100/300) g3 50 ~16% (50/300) The value of the cpu.weight controller file is not a percentage. If one process stopped running, leaving cgroup g2 with no running processes, the calculation would omit the cgroup g2 and only account weights of cgroups g1 and g3 : Child cgroup cpu.weight file CPU time allocation g1 150 ~75% (150/200) g3 50 ~25% (50/200) Important If a child cgroup has multiple running processes, the CPU time allocated to the cgroup is distributed equally among its member processes. 
Verification Verify that the applications run in the specified control groups: The command output shows the processes of the specified applications that run in the Example/g*/ child cgroups. Inspect the current CPU consumption of the throttled applications: Note All processes run on a single CPU for clear illustration. The CPU weight applies the same principles when used on multiple CPUs. Notice that the CPU resource for the PID 33373 , PID 33374 , and PID 33377 was allocated based on the 150, 100, and 50 weights you assigned to the respective child cgroups. The weights correspond to around 50%, 33%, and 16% allocation of CPU time for each application.
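The following is a minimal, consolidated sketch of the two procedures above as a single shell session run as root. It assumes the same layout and values used in the examples in this chapter (the Example and Example/tasks control groups, CPU 1, a 0.2 second quota per 1 second period, and the PIDs 34578 and 34579); adapt the PIDs, CPU number, and limits to your own system.

# Prepare the cgroup hierarchy (run as root, cgroups-v2 mounted at /sys/fs/cgroup).
echo "+cpu" >> /sys/fs/cgroup/cgroup.subtree_control
echo "+cpuset" >> /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/Example/
echo "+cpu" >> /sys/fs/cgroup/Example/cgroup.subtree_control
echo "+cpuset" >> /sys/fs/cgroup/Example/cgroup.subtree_control
mkdir /sys/fs/cgroup/Example/tasks/
# Pin the child group to one CPU so that the cpu controller activates.
echo "1" > /sys/fs/cgroup/Example/tasks/cpuset.cpus
# CPU bandwidth: allow 200000 us of CPU time per 1000000 us period (20%) for all member processes combined.
echo "200000 1000000" > /sys/fs/cgroup/Example/tasks/cpu.max
# Move the two CPU-bound processes into the child group.
echo "34578" > /sys/fs/cgroup/Example/tasks/cgroup.procs
echo "34579" > /sys/fs/cgroup/Example/tasks/cgroup.procs
# Confirm the membership; top should now show roughly 10% CPU for each process.
cat /proc/34578/cgroup /proc/34579/cgroup
# Alternative (CPU weight instead of a hard quota): create Example/g1, g2, and g3,
# write 150, 100, and 50 to their cpu.weight files, and move one process into each
# group; the kernel then shares CPU time roughly 50%/33%/16% among them.

Note that cpu.max enforces an absolute ceiling even when the CPU is otherwise idle, whereas cpu.weight only shares CPU time proportionally when the processes actually contend for the same CPU.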
|
[
"grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"systemd.unified_cgroup_hierarchy=1\"",
"grubby --update-kernel=ALL --args=\"systemd.unified_cgroup_hierarchy=1\"",
"mount -l | grep cgroup cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)",
"ll /sys/fs/cgroup/ -r- r- r--. 1 root root 0 Apr 29 12:03 cgroup.controllers -rw-r- r--. 1 root root 0 Apr 29 12:03 cgroup.max.depth -rw-r- r--. 1 root root 0 Apr 29 12:03 cgroup.max.descendants -rw-r- r--. 1 root root 0 Apr 29 12:03 cgroup.procs -r- r- r--. 1 root root 0 Apr 29 12:03 cgroup.stat -rw-r- r--. 1 root root 0 Apr 29 12:18 cgroup.subtree_control -rw-r- r--. 1 root root 0 Apr 29 12:03 cgroup.threads -rw-r- r--. 1 root root 0 Apr 29 12:03 cpu.pressure -r- r- r--. 1 root root 0 Apr 29 12:03 cpuset.cpus.effective -r- r- r--. 1 root root 0 Apr 29 12:03 cpuset.mems.effective -r- r- r--. 1 root root 0 Apr 29 12:03 cpu.stat drwxr-xr-x. 2 root root 0 Apr 29 12:03 init.scope -rw-r- r--. 1 root root 0 Apr 29 12:03 io.pressure -r- r- r--. 1 root root 0 Apr 29 12:03 io.stat -rw-r- r--. 1 root root 0 Apr 29 12:03 memory.pressure -r- r- r--. 1 root root 0 Apr 29 12:03 memory.stat drwxr-xr-x. 69 root root 0 Apr 29 12:03 system.slice drwxr-xr-x. 3 root root 0 Apr 29 12:18 user.slice",
"top Tasks: 104 total, 3 running, 101 sleeping, 0 stopped, 0 zombie %Cpu(s): 17.6 us, 81.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.8 hi, 0.0 si, 0.0 st MiB Mem : 3737.4 total, 3312.7 free, 133.3 used, 291.4 buff/cache MiB Swap: 4060.0 total, 4060.0 free, 0.0 used. 3376.1 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 34578 root 20 0 18720 1756 1468 R 99.0 0.0 0:31.09 sha1sum 34579 root 20 0 18720 1772 1480 R 99.0 0.0 0:30.54 sha1sum 1 root 20 0 186192 13940 9500 S 0.0 0.4 0:01.60 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd 3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp 4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp",
"cat /sys/fs/cgroup/cgroup.controllers cpuset cpu io memory hugetlb pids rdma",
"echo \"+cpu\" >> /sys/fs/cgroup/cgroup.subtree_control echo \"+cpuset\" >> /sys/fs/cgroup/cgroup.subtree_control",
"mkdir /sys/fs/cgroup/Example/",
"ll /sys/fs/cgroup/Example/ -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.controllers -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.events -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.freeze -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.max.depth -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.max.descendants -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.procs -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.stat -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.subtree_control ... -rw-r- r--. 1 root root 0 Jun 1 10:33 cpuset.cpus -r- r- r--. 1 root root 0 Jun 1 10:33 cpuset.cpus.effective -rw-r- r--. 1 root root 0 Jun 1 10:33 cpuset.cpus.partition -rw-r- r--. 1 root root 0 Jun 1 10:33 cpuset.mems -r- r- r--. 1 root root 0 Jun 1 10:33 cpuset.mems.effective -r- r- r--. 1 root root 0 Jun 1 10:33 cpu.stat -rw-r- r--. 1 root root 0 Jun 1 10:33 cpu.weight -rw-r- r--. 1 root root 0 Jun 1 10:33 cpu.weight.nice ... -r- r- r--. 1 root root 0 Jun 1 10:33 memory.events.local -rw-r- r--. 1 root root 0 Jun 1 10:33 memory.high -rw-r- r--. 1 root root 0 Jun 1 10:33 memory.low ... -r- r- r--. 1 root root 0 Jun 1 10:33 pids.current -r- r- r--. 1 root root 0 Jun 1 10:33 pids.events -rw-r- r--. 1 root root 0 Jun 1 10:33 pids.max",
"echo \"+cpu\" >> /sys/fs/cgroup/Example/cgroup.subtree_control echo \"+cpuset\" >> /sys/fs/cgroup/Example/cgroup.subtree_control",
"mkdir /sys/fs/cgroup/Example/tasks/",
"ll /sys/fs/cgroup/Example/tasks -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.controllers -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.events -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.freeze -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.max.depth -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.max.descendants -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.procs -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.stat -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.subtree_control -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.threads -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.type -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.max -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.pressure -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus -r- r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus.effective -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus.partition -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.mems -r- r- r--. 1 root root 0 Jun 1 11:45 cpuset.mems.effective -r- r- r--. 1 root root 0 Jun 1 11:45 cpu.stat -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.weight -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.weight.nice -rw-r- r--. 1 root root 0 Jun 1 11:45 io.pressure -rw-r- r--. 1 root root 0 Jun 1 11:45 memory.pressure",
"echo \"1\" > /sys/fs/cgroup/Example/tasks/cpuset.cpus",
"cat /sys/fs/cgroup/cgroup.subtree_control /sys/fs/cgroup/Example/cgroup.subtree_control cpuset cpu memory pids cpuset cpu",
"cat /sys/fs/cgroup/Example/tasks/cpuset.cpus 1",
"... ├── Example │ ├── tasks ...",
"echo \"200000 1000000\" > /sys/fs/cgroup/Example/tasks/cpu.max",
"cat /sys/fs/cgroup/Example/tasks/cpu.max 200000 1000000",
"echo \"34578\" > /sys/fs/cgroup/Example/tasks/cgroup.procs echo \"34579\" > /sys/fs/cgroup/Example/tasks/cgroup.procs",
"cat /proc/34578/cgroup /proc/34579/cgroup 0::/Example/tasks 0::/Example/tasks",
"top top - 11:13:53 up 23:10, 1 user, load average: 0.26, 1.33, 1.66 Tasks: 104 total, 3 running, 101 sleeping, 0 stopped, 0 zombie %Cpu(s): 3.0 us, 7.0 sy, 0.0 ni, 89.5 id, 0.0 wa, 0.2 hi, 0.2 si, 0.2 st MiB Mem : 3737.4 total, 3312.6 free, 133.4 used, 291.4 buff/cache MiB Swap: 4060.0 total, 4060.0 free, 0.0 used. 3376.0 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 34578 root 20 0 18720 1756 1468 R 10.0 0.0 37:36.13 sha1sum 34579 root 20 0 18720 1772 1480 R 10.0 0.0 37:41.22 sha1sum 1 root 20 0 186192 13940 9500 S 0.0 0.4 0:01.60 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd 3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp 4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp",
"... ├── Example │ ├── g1 │ ├── g2 │ └── g3 ...",
"echo \"150\" > /sys/fs/cgroup/Example/g1/cpu.weight echo \"100\" > /sys/fs/cgroup/Example/g2/cpu.weight echo \"50\" > /sys/fs/cgroup/Example/g3/cpu.weight",
"echo \"33373\" > /sys/fs/cgroup/Example/g1/cgroup.procs echo \"33374\" > /sys/fs/cgroup/Example/g2/cgroup.procs echo \"33377\" > /sys/fs/cgroup/Example/g3/cgroup.procs",
"cat /proc/33373/cgroup /proc/33374/cgroup /proc/33377/cgroup 0::/Example/g1 0::/Example/g2 0::/Example/g3",
"top top - 05:17:18 up 1 day, 18:25, 1 user, load average: 3.03, 3.03, 3.00 Tasks: 95 total, 4 running, 91 sleeping, 0 stopped, 0 zombie %Cpu(s): 18.1 us, 81.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st MiB Mem : 3737.0 total, 3233.7 free, 132.8 used, 370.5 buff/cache MiB Swap: 4060.0 total, 4060.0 free, 0.0 used. 3373.1 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 33373 root 20 0 18720 1748 1460 R 49.5 0.0 415:05.87 sha1sum 33374 root 20 0 18720 1756 1464 R 32.9 0.0 412:58.33 sha1sum 33377 root 20 0 18720 1860 1568 R 16.3 0.0 411:03.12 sha1sum 760 root 20 0 416620 28540 15296 S 0.3 0.7 0:10.23 tuned 1 root 20 0 186328 14108 9484 S 0.0 0.4 0:02.00 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthread"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-cgroups-v2-to-control-distribution-of-cpu-time-for-applications_managing-monitoring-and-updating-the-kernel
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/making-open-source-more-inclusive
|
Chapter 2. Planning for Bare Metal Provisioning
|
Chapter 2. Planning for Bare Metal Provisioning This chapter outlines the requirements for configuring the Bare Metal service, including installation assumptions, hardware requirements, and networking requirements. 2.1. Installation Assumptions This guide assumes that you have installed the director on the undercloud node, and are ready to install the Bare Metal service along with the rest of the overcloud. For more information on installing the director, see Installing the Undercloud . Note The Bare Metal service in the overcloud is designed for a trusted tenant environment, as the bare metal nodes have direct access to the control plane network of your OpenStack installation. If you implement a custom composable network for Ironic services in the overcloud, users do not need to access the control plane. 2.2. Hardware Requirements Overcloud Requirements The hardware requirements for an overcloud with the Bare Metal service are the same as for the standard overcloud. For more information, see Overcloud Requirements in the Director Installation and Usage guide. Bare Metal Machine Requirements The hardware requirements for bare metal machines that will be provisioned vary depending on the operating system you are installing. For Red Hat Enterprise Linux 8, see the Red Hat Enterprise Linux 8 Performing a standard RHEL installation . For Red Hat Enterprise Linux 7, see the Red Hat Enterprise Linux 7 Installation Guide . For Red Hat Enterprise Linux 6, see the Red Hat Enterprise Linux 6 Installation Guide . All bare metal machines that you want to provision require the following: A NIC to connect to the bare metal network. A power management interface (for example, IPMI) connected to a network reachable from the ironic-conductor service. By default, ironic-conductor runs on all of the controller nodes, unless you are using composable roles and running ironic-conductor elsewhere. PXE boot on the bare metal network. Disable PXE boot on all other NICs in the deployment. 2.3. Networking requirements The bare metal network: This is a private network that the Bare Metal service uses for the following operations: The provisioning and management of bare metal machines on the overcloud. Cleaning bare metal nodes before and between deployments. Tenant access to the bare metal nodes. The bare metal network provides DHCP and PXE boot functions to discover bare metal systems. This network must use a native VLAN on a trunked interface so that the Bare Metal service can serve PXE boot and DHCP requests. You can configure the bare metal network in two ways: Use a flat bare metal network for Ironic Conductor services. This network must route to the Ironic services on the control plane. If you define an isolated bare metal network, the bare metal nodes cannot PXE boot. Use a custom composable network to implement Ironic services in the overcloud. Note The Bare Metal service in the overcloud is designed for a trusted tenant environment, as the bare metal nodes have direct access to the control plane network of your OpenStack installation. If you implement a custom composable network for Ironic services in the overcloud, users do not need to access the control plane. Network tagging: The control plane network (the director's provisioning network) is always untagged. The bare metal network must be untagged for provisioning, and must also have access to the Ironic API. Other networks may be tagged. Overcloud controllers: The controller nodes with the Bare Metal service must have access to the bare metal network. 
Bare metal nodes: The NIC that the bare metal node is configured to PXE-boot from must have access to the bare metal network. 2.3.1. The Default Bare Metal Network In this architecture, the bare metal network is separated from the control plane network. The bare metal network is a flat network that also acts as the tenant network. The bare metal network is created by the OpenStack operator. This network requires a route to the director provisioning network. Ironic users have access to the public OpenStack APIs, and to the bare metal network. Since the bare metal network is routed to the director's provisioning network, users also have indirect access to the control plane. Ironic uses the bare metal network for node cleaning. Default bare metal network architecture diagram 2.3.2. The Custom Composable Network In this architecture, the bare metal network is a custom composable network that does not have access to the control plane. Creating this network might be preferable if you want to limit access to the control plane. The custom composable bare metal network is created by the OpenStack operator. Ironic users have access to the public OpenStack APIs, and to the custom composable bare metal network. Ironic uses the custom composable bare metal network for node cleaning.
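As a concrete illustration of the default architecture described in section 2.3.1, the following sketch shows how an operator might create a flat bare metal network and its subnet with the OpenStack CLI. The network and subnet names, the physical network label (baremetal-physnet), and the address ranges are hypothetical placeholders, not values prescribed by this guide; substitute values that match your deployment and ensure the subnet routes to the director provisioning network.

# Hypothetical example: create a flat provider network to act as the bare metal network.
openstack network create \
  --provider-network-type flat \
  --provider-physical-network baremetal-physnet \
  --share \
  baremetal
# Add a subnet with a DHCP allocation pool used for provisioning and cleaning.
openstack subnet create \
  --network baremetal \
  --subnet-range 192.168.25.0/24 \
  --allocation-pool start=192.168.25.100,end=192.168.25.200 \
  --gateway 192.168.25.1 \
  baremetal-subnet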
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/bare_metal_provisioning/sect-planning
|
Chapter 5. What to do next? Day 2
|
Chapter 5. What to do next? Day 2 As a storage administrator, once you have installed and configured Red Hat Ceph Storage 8, you are ready to perform "Day Two" operations for your storage cluster. These operations include adding metadata servers (MDS) and object gateways (RGW), and configuring services such as NFS. For more information about how to use the cephadm orchestrator to perform "Day Two" operations, refer to the Red Hat Ceph Storage 8 Operations Guide . To deploy, configure, and administer the Ceph Object Gateway as part of "Day Two" operations, refer to the Red Hat Ceph Storage 8 Object Gateway Guide .
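As an illustrative sketch only, the following commands show how the cephadm orchestrator might be used for such "Day Two" additions; the file system name, service ID, and host placements are hypothetical placeholders, and the Operations Guide and Object Gateway Guide remain the authoritative references.

# Hypothetical example: deploy two MDS daemons for a CephFS file system named "cephfs".
ceph orch apply mds cephfs --placement="2 host01 host02"
# Hypothetical example: deploy two Ceph Object Gateway (RGW) daemons for a service named "s3gw".
ceph orch apply rgw s3gw --placement="2 host03 host04"
# List the deployed services to confirm the placements.
ceph orch ls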
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/installation_guide/what-to-do-next
|
Chapter 5. Develop
|
Chapter 5. Develop 5.1. Serverless applications Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object. Example Knative Service object YAML file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: "Hello Serverless!" 1 The name of the application. 2 The namespace the application uses. 3 The image of the application. 4 The environment variable printed out by the sample application. You can create a serverless application by using one of the following methods: Create a Knative service from the OpenShift Container Platform web console. See the documentation about Creating applications using the Developer perspective . Create a Knative service by using the Knative ( kn ) CLI. Create and apply a Knative Service object as a YAML file, by using the oc CLI. 5.1.1. Creating serverless applications by using the Knative CLI Using the Knative ( kn ) CLI to create serverless applications provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service create command to create a basic serverless application. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service: USD kn service create <service-name> --image <image> --tag <tag-value> Where: --image is the URI of the image for the application. --tag is an optional flag that can be used to add a tag to the initial revision that is created with the service. Example command USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest Example output Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration "event-display" is waiting for a Revision to become ready. 3.857s ... 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing 5.1.2. Creating a service using offline mode You can execute kn service commands in offline mode, so that no changes happen on the cluster, and instead the service descriptor file is created on your local machine. After the descriptor file is created, you can modify the file before propagating changes to the cluster. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 
Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. Procedure In offline mode, create a local Knative service descriptor file: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest \ --target ./ \ --namespace test Example output Service 'event-display' created in namespace 'test'. The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree. If you do not specify an existing directory, but use a filename, such as --target my-service.yaml , then no directory tree is created. Instead, only the service descriptor file my-service.yaml is created in the current directory. The filename can have the .yaml , .yml , or .json extension. Choosing .json creates the service descriptor file in the JSON format. The --namespace test option places the new service in the test namespace. If you do not use --namespace , and you are logged in to an OpenShift cluster, the descriptor file is created in the current namespace. Otherwise, the descriptor file is created in the default namespace. Examine the created directory structure: USD tree ./ Example output ./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file The current ./ directory specified with --target contains the new test/ directory that is named after the specified namespace. The test/ directory contains the ksvc directory, named after the resource type. The ksvc directory contains the descriptor file event-display.yaml , named according to the specified service name. Examine the generated service descriptor file: USD cat test/ksvc/event-display.yaml Example output apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: "" resources: {} status: {} List information about the new service: USD kn service describe event-display --target ./ --namespace test Example output Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories. Alternatively, you can directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml , .yml , and .json . The --namespace option specifies the namespace, which communicates to kn the subdirectory that contains the necessary service descriptor file. If you do not use --namespace , and you are logged in to an OpenShift cluster, kn searches for the service in the subdirectory that is named after the current namespace. Otherwise, kn searches in the default/ subdirectory. Use the service descriptor file to create the service on the cluster: USD kn service create -f test/ksvc/event-display.yaml Example output Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s ... 0.168s Configuration "event-display" is waiting for a Revision to become ready. 23.377s ... 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. 
Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com 5.1.3. Creating serverless applications using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a serverless application by using YAML, you must create a YAML file that defines a Knative Service object, then apply it by using oc apply . After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the OpenShift CLI ( oc ). Procedure Create a YAML file containing the following sample code: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-delivery namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest env: - name: RESPONSE value: "Hello Serverless!" Navigate to the directory where the YAML file is contained, and deploy the application by applying the YAML file: USD oc apply -f <filename> 5.1.4. Verifying your serverless application deployment To verify that your serverless application has been deployed successfully, you must get the application URL created by Knative, and then send a request to that URL and observe the output. OpenShift Serverless supports the use of both HTTP and HTTPS URLs, however the output from oc get ksvc always prints URLs using the http:// format. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the oc CLI. You have created a Knative service. Prerequisites Install the OpenShift CLI ( oc ). Procedure Find the application URL: USD oc get ksvc <service_name> Example output NAME URL LATESTCREATED LATESTREADY READY REASON event-delivery http://event-delivery-default.example.com event-delivery-4wsd2 event-delivery-4wsd2 True Make a request to your cluster and observe the output. Example HTTP request USD curl http://event-delivery-default.example.com Example HTTPS request USD curl https://event-delivery-default.example.com Example output Hello Serverless! Optional. If you receive an error relating to a self-signed certificate in the certificate chain, you can add the --insecure flag to the curl command to ignore the error: USD curl https://event-delivery-default.example.com --insecure Example output Hello Serverless! Important Self-signed certificates must not be used in a production deployment. This method is only for testing purposes. Optional. If your OpenShift Container Platform cluster is configured with a certificate that is signed by a certificate authority (CA) but not yet globally configured for your system, you can specify this with the curl command. The path to the certificate can be passed to the curl command by using the --cacert flag: USD curl https://event-delivery-default.example.com --cacert <file> Example output Hello Serverless! 5.1.5. 
Interacting with a serverless application using HTTP2 and gRPC OpenShift Serverless supports only insecure or edge-terminated routes. Insecure or edge-terminated routes do not support HTTP2 on OpenShift Container Platform. These routes also do not support gRPC because gRPC is transported by HTTP2. If you use these protocols in your application, you must call the application using the ingress gateway directly. To do this you must find the ingress gateway's public address and the application's specific host. Important This method needs to expose Kourier Gateway using the LoadBalancer service type. You can configure this by adding the following YAML to your KnativeServing custom resource definition (CRD): ... spec: ingress: kourier: service-type: LoadBalancer ... Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. Install the OpenShift CLI ( oc ). You have created a Knative service. Procedure Find the application host. See the instructions in Verifying your serverless application deployment . Find the ingress gateway's public address: USD oc -n knative-serving-ingress get svc kourier Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP 67m The public address is surfaced in the EXTERNAL-IP field, and in this case is a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com . Manually set the host header of your HTTP request to the application's host, but direct the request itself against the public address of the ingress gateway. USD curl -H "Host: hello-default.example.com" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com Example output Hello Serverless! You can also make a gRPC request by setting the authority to the application's host, while directing the request against the ingress gateway directly: grpc.Dial( "a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80", grpc.WithAuthority("hello-default.example.com:80"), grpc.WithInsecure(), ) Note Ensure that you append the respective port, 80 by default, to both hosts as shown in the example. 5.1.6. Enabling communication with Knative applications on a cluster with restrictive network policies If you are using a cluster that multiple users have access to, your cluster might use network policies to control which pods, services, and namespaces can communicate with each other over the network. If your cluster uses restrictive network policies, it is possible that Knative system pods are not able to access your Knative application. For example, if your namespace has the following network policy, which denies all requests, Knative system pods cannot access your Knative application: Example NetworkPolicy object that denies all requests to the namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: example-namespace spec: podSelector: ingress: [] To allow access to your applications from Knative system pods, you must add a label to each of the Knative system namespaces, and then create a NetworkPolicy object in your application namespace that allows access to the namespace for other namespaces that have this label. Important A network policy that denies requests to non-Knative services on your cluster still prevents access to these services. 
However, by allowing access from Knative system namespaces to your Knative application, you are allowing access to your Knative application from all namespaces in the cluster. If you do not want to allow access to your Knative application from all namespaces on the cluster, you might want to use JSON Web Token authentication for Knative services instead. JSON Web Token authentication for Knative services requires Service Mesh. Prerequisites Install the OpenShift CLI ( oc ). OpenShift Serverless Operator and Knative Serving are installed on your cluster. Procedure Add the knative.openshift.io/system-namespace=true label to each Knative system namespace that requires access to your application: Label the knative-serving namespace: USD oc label namespace knative-serving knative.openshift.io/system-namespace=true Label the knative-serving-ingress namespace: USD oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true Label the knative-eventing namespace: USD oc label namespace knative-eventing knative.openshift.io/system-namespace=true Label the knative-kafka namespace: USD oc label namespace knative-kafka knative.openshift.io/system-namespace=true Create a NetworkPolicy object in your application namespace to allow access from namespaces with the knative.openshift.io/system-namespace label: Example NetworkPolicy object apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: <network_policy_name> 1 namespace: <namespace> 2 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/system-namespace: "true" podSelector: {} policyTypes: - Ingress 1 Provide a name for your network policy. 2 The namespace where your application exists. 5.1.7. Configuring init containers Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. Note Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently. Multiple init containers are supported in a single Knative service spec. Knative provides a default, configurable naming template if a template name is not provided. The init containers template can be set by adding an appropriate value in a Knative Service object spec. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. Before you can use init containers for Knative services, an administrator must add the kubernetes.podspec-init-containers flag to the KnativeServing custom resource (CR). See the OpenShift Serverless "Global configuration" documentation for more information. Procedure Add the initContainers spec to a Knative Service object: Example service spec apiVersion: serving.knative.dev/v1 kind: Service ... spec: template: spec: initContainers: - imagePullPolicy: IfNotPresent 1 image: <image_uri> 2 volumeMounts: 3 - name: data mountPath: /data ... 1 The image pull policy when the image is downloaded. 2 The URI for the init container image. 3 The location where volumes are mounted within the container file system. 5.1.8. HTTPS redirection per service You can enable or disable HTTPS redirection for a service by configuring the networking.knative.dev/http-option annotation. 
The following example shows how you can use this annotation in a Knative Service YAML object: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-option: "redirected" spec: ... 5.1.9. Additional resources Knative Serving CLI commands Configuring JSON Web Token authentication for Knative services 5.2. Autoscaling Knative Serving provides automatic scaling, or autoscaling , for applications to match incoming demand. For example, if an application is receiving no traffic, and scale-to-zero is enabled, Knative Serving scales the application down to zero replicas. If scale-to-zero is disabled, the application is scaled down to the minimum number of replicas configured for applications on the cluster. Replicas can also be scaled up to meet demand if traffic to the application increases. Autoscaling settings for Knative services can be global settings that are configured by cluster administrators, or per-revision settings that are configured for individual services. You can modify per-revision settings for your services by using the OpenShift Container Platform web console, by modifying the YAML file for your service, or by using the Knative ( kn ) CLI. Note Any limits or targets that you set for a service are measured against a single instance of your application. For example, setting the target annotation to 50 configures the autoscaler to scale the application so that each revision handles 50 requests at a time. 5.2.1. Scale bounds Scale bounds determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs. 5.2.1.1. Minimum scale bounds The minimum number of replicas that can serve an application is determined by the min-scale annotation. If scale to zero is not enabled, the min-scale value defaults to 1 . The min-scale value defaults to 0 replicas if the following conditions are met: The min-scale annotation is not set Scaling to zero is enabled The class KPA is used Example service spec with min-scale annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: "0" ... 5.2.1.1.1. Setting the min-scale annotation by using the Knative CLI Using the Knative ( kn ) CLI to set the min-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-min flag to create or modify the min-scale value for a service. Prerequisites Knative Serving is installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Set the minimum number of replicas for the service by using the --scale-min flag: USD kn service create <service_name> --image <image_uri> --scale-min <integer> Example command USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2 5.2.1.2. Maximum scale bounds The maximum number of replicas that can serve an application is determined by the max-scale annotation. If the max-scale annotation is not set, there is no upper limit for the number of replicas created. 
Example service spec with max-scale annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: "10" ... 5.2.1.2.1. Setting the max-scale annotation by using the Knative CLI Using the Knative ( kn ) CLI to set the max-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-max flag to create or modify the max-scale value for a service. Prerequisites Knative Serving is installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Set the maximum number of replicas for the service by using the --scale-max flag: USD kn service create <service_name> --image <image_uri> --scale-max <integer> Example command USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10 5.2.2. Concurrency Concurrency determines the number of simultaneous requests that can be processed by each replica of an application at any given time. Concurrency can be configured as a soft limit or a hard limit : A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. A hard limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. Important Using a hard limit configuration is only recommended if there is a clear use case for it with your application. Having a low, hard limit specified may have a negative impact on the throughput and latency of an application, and might cause cold starts. Adding a soft target and a hard limit means that the autoscaler targets the soft target number of concurrent requests, but imposes a hard limit of the hard limit value for the maximum number of requests. If the hard limit value is less than the soft limit value, the soft limit value is tuned down, because there is no need to target more requests than the number that can actually be handled. 5.2.2.1. Configuring a soft concurrency target A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. You can specify a soft concurrency target for your Knative service by setting the autoscaling.knative.dev/target annotation in the spec, or by using the kn service command with the correct flags. Procedure Optional: Set the autoscaling.knative.dev/target annotation for your Knative service in the spec of the Service custom resource: Example service spec apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: "200" Optional: Use the kn service command to specify the --concurrency-target flag: USD kn service create <service_name> --image <image_uri> --concurrency-target <integer> Example command to create a service with a concurrency target of 50 requests USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50 5.2.2.2. Configuring a hard concurrency limit A hard concurrency limit is a strictly enforced upper bound requests limit. 
If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. You can specify a hard concurrency limit for your Knative service by modifying the containerConcurrency spec, or by using the kn service command with the correct flags. Procedure Optional: Set the containerConcurrency spec for your Knative service in the spec of the Service custom resource: Example service spec apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: containerConcurrency: 50 The default value is 0 , which means that there is no limit on the number of simultaneous requests that are permitted to flow into one replica of the service at a time. A value greater than 0 specifies the exact number of requests that are permitted to flow into one replica of the service at a time. This example would enable a hard concurrency limit of 50 requests. Optional: Use the kn service command to specify the --concurrency-limit flag: USD kn service create <service_name> --image <image_uri> --concurrency-limit <integer> Example command to create a service with a concurrency limit of 50 requests USD kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50 5.2.2.3. Concurrency target utilization This value specifies the percentage of the concurrency limit that is actually targeted by the autoscaler. This is also known as specifying the hotness at which a replica runs, which enables the autoscaler to scale up before the defined hard limit is reached. For example, if the containerConcurrency value is set to 10, and the target-utilization-percentage value is set to 70 percent, the autoscaler creates a new replica when the average number of concurrent requests across all existing replicas reaches 7. Requests numbered 7 to 10 are still sent to the existing replicas, but additional replicas are started in anticipation of being required after the containerConcurrency value is reached. Example service configured using the target-utilization-percentage annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: "70" ... 5.3. Traffic management In a Knative application, traffic can be managed by creating a traffic split. A traffic split is configured as part of a route, which is managed by a Knative service. Configuring a route allows requests to be sent to different revisions of a service. This routing is determined by the traffic spec of the Service object. A traffic spec declaration consists of one or more revisions, each responsible for handling a portion of the overall traffic. The percentages of traffic routed to each revision must add up to 100%, which is ensured by a Knative validation. The revisions specified in a traffic spec can either be a fixed, named revision, or can point to the "latest" revision, which tracks the head of the list of all revisions for the service. The "latest" revision is a type of floating reference that updates if a new revision is created. Each revision can have a tag attached that creates an additional access URL for that revision. The traffic spec can be modified by: Editing the YAML of a Service object directly. Using the Knative ( kn ) CLI --traffic flag. Using the OpenShift Container Platform web console. 
When you create a Knative service, it does not have any default traffic spec settings. 5.3.1. Traffic spec examples The following example shows a traffic spec where 100% of traffic is routed to the latest revision of the service. Under status , you can see the name of the latest revision that latestRevision resolves to: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - latestRevision: true percent: 100 status: ... traffic: - percent: 100 revisionName: example-service The following example shows a traffic spec where 100% of traffic is routed to the revision tagged as current , and the name of that revision is specified as example-service . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0 The following example shows how the list of revisions in the traffic spec can be extended so that traffic is split between multiple revisions. This example sends 50% of traffic to the revision tagged as current , and 50% of traffic to the revision tagged as candidate . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0 5.3.2. Knative CLI traffic management flags The Knative ( kn ) CLI supports traffic operations on the traffic block of a service as part of the kn service update command. The following table displays a summary of traffic splitting flags, value formats, and the operation the flag performs. The Repetition column denotes whether repeating the particular value of flag is allowed in a kn service update command. Flag Value(s) Operation Repetition --traffic RevisionName=Percent Gives Percent traffic to RevisionName Yes --traffic Tag=Percent Gives Percent traffic to the revision having Tag Yes --traffic @latest=Percent Gives Percent traffic to the latest ready revision No --tag RevisionName=Tag Gives Tag to RevisionName Yes --tag @latest=Tag Gives Tag to the latest ready revision No --untag Tag Removes Tag from revision Yes 5.3.2.1. Multiple flags and order precedence All traffic-related flags can be specified using a single kn service update command. kn defines the precedence of these flags. The order of the flags specified when using the command is not taken into account. The precedence of the flags as they are evaluated by kn are: --untag : All the referenced revisions with this flag are removed from the traffic block. --tag : Revisions are tagged as specified in the traffic block. --traffic : The referenced revisions are assigned a portion of the traffic split. You can add tags to revisions and then split traffic according to the tags you have set. 5.3.2.2. Custom URLs for revisions Assigning a --tag flag to a service by using the kn service update command creates a custom URL for the revision that is created when you update the service. The custom URL follows the pattern https://<tag>-<service_name>-<namespace>.<domain> or http://<tag>-<service_name>-<namespace>.<domain> . The --tag and --untag flags use the following syntax: Require one value. 
Denote a unique tag in the traffic block of the service. Can be specified multiple times in one command. 5.3.2.2.1. Example: Assign a tag to a revision The following example assigns the tag latest to a revision named example-revision : USD kn service update <service_name> --tag @latest=example-tag 5.3.2.2.2. Example: Remove a tag from a revision You can remove a tag to remove the custom URL, by using the --untag flag. Note If a revision has its tags removed, and it is assigned 0% of the traffic, the revision is removed from the traffic block entirely. The following command removes all tags from the revision named example-revision : USD kn service update <service_name> --untag example-tag 5.3.3. Creating a traffic split by using the Knative CLI Using the Knative ( kn ) CLI to create traffic splits provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service update command to split traffic between revisions of a service. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a Knative service. Procedure Specify the revision of your service and what percentage of traffic you want to route to it by using the --traffic tag with a standard kn service update command: Example command USD kn service update <service_name> --traffic <revision>=<percentage> Where: <service_name> is the name of the Knative service that you are configuring traffic routing for. <revision> is the revision that you want to configure to receive a percentage of traffic. You can either specify the name of the revision, or a tag that you assigned to the revision by using the --tag flag. <percentage> is the percentage of traffic that you want to send to the specified revision. Optional: The --traffic flag can be specified multiple times in one command. For example, if you have a revision tagged as @latest and a revision named stable , you can specify the percentage of traffic that you want to split to each revision as follows: Example command USD kn service update example-service --traffic @latest=20,stable=80 If you have multiple revisions and do not specify the percentage of traffic that should be split to the last revision, the --traffic flag can calculate this automatically. For example, if you have a third revision named example , and you use the following command: Example command USD kn service update example-service --traffic @latest=10,stable=60 The remaining 30% of traffic is split to the example revision, even though it was not specified. 5.3.4. Managing traffic between revisions by using the OpenShift Container Platform web console After you create a serverless application, the application is displayed in the Topology view of the Developer perspective in the OpenShift Container Platform web console. The application revision is represented by the node, and the Knative service is indicated by a quadrilateral around the node. Any new change in the code or the service configuration creates a new revision, which is a snapshot of the code at a given time. For a service, you can manage the traffic between the revisions of the service by splitting and routing it to the different revisions as required. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have logged in to the OpenShift Container Platform web console. 
Procedure To split traffic between multiple revisions of an application in the Topology view: Click the Knative service to see its overview in the side panel. Click the Resources tab, to see a list of Revisions and Routes for the service. Figure 5.1. Serverless application Click the service, indicated by the S icon at the top of the side panel, to see an overview of the service details. Click the YAML tab and modify the service configuration in the YAML editor, and click Save . For example, change the timeoutseconds from 300 to 301 . This change in the configuration triggers a new revision. In the Topology view, the latest revision is displayed and the Resources tab for the service now displays the two revisions. In the Resources tab, click Set Traffic Distribution to see the traffic distribution dialog box: Add the split traffic percentage portion for the two revisions in the Splits field. Add tags to create custom URLs for the two revisions. Click Save to see two nodes representing the two revisions in the Topology view. Figure 5.2. Serverless application revisions 5.3.5. Routing and managing traffic by using a blue-green deployment strategy You can safely reroute traffic from a production version of an app to a new version, by using a blue-green deployment strategy . Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. Install the OpenShift CLI ( oc ). Procedure Create and deploy an app as a Knative service. Find the name of the first revision that was created when you deployed the service, by viewing the output from the following command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' Example command USD oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}' Example output USD example-service-00001 Add the following YAML to the service spec to send inbound traffic to the revision: ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision ... Verify that you can view your app at the URL output you get from running the following command: USD oc get ksvc <service_name> Deploy a second revision of your app by modifying at least one field in the template spec of the service and redeploying it. For example, you can modify the image of the service, or an env environment variable. You can redeploy the service by applying the service YAML file, or by using the kn service update command if you have installed the Knative ( kn ) CLI. Find the name of the second, latest revision that was created when you redeployed the service, by running the command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' At this point, both the first and second revisions of the service are deployed and running. Update your existing service to create a new, test endpoint for the second revision, while still sending all other traffic to the first revision: Example of updated service spec with test endpoint ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route ... After you redeploy this service by reapplying the YAML resource, the second revision of the app is now staged. No traffic is routed to the second revision at the main URL, and Knative creates a new service named v2 for testing the newly deployed revision. 
Get the URL of the new service for the second revision, by running the following command: USD oc get ksvc <service_name> --output jsonpath="{.status.traffic[*].url}" You can use this URL to validate that the new version of the app is behaving as expected before you route any traffic to it. Update your existing service again, so that 50% of traffic is sent to the first revision, and 50% is sent to the second revision: Example of updated service spec splitting traffic 50/50 between revisions ... spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2 ... When you are ready to route all traffic to the new version of the app, update the service again to send 100% of traffic to the second revision: Example of updated service spec sending all traffic to the second revision ... spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2 ... Tip You can remove the first revision instead of setting it to 0% of traffic if you do not plan to roll back the revision. Non-routeable revision objects are then garbage-collected. Visit the URL of the first revision to verify that no more traffic is being sent to the old version of the app. 5.4. Routing Knative leverages OpenShift Container Platform TLS termination to provide routing for Knative services. When a Knative service is created, a OpenShift Container Platform route is automatically created for the service. This route is managed by the OpenShift Serverless Operator. The OpenShift Container Platform route exposes the Knative service through the same domain as the OpenShift Container Platform cluster. You can disable Operator control of OpenShift Container Platform routing so that you can configure a Knative route to directly use your TLS certificates instead. Knative routes can also be used alongside the OpenShift Container Platform route to provide additional fine-grained routing capabilities, such as traffic splitting. 5.4.1. Customizing labels and annotations for OpenShift Container Platform routes OpenShift Container Platform routes support the use of custom labels and annotations, which you can configure by modifying the metadata spec of a Knative service. Custom labels and annotations are propagated from the service to the Knative route, then to the Knative ingress, and finally to the OpenShift Container Platform route. Prerequisites You must have the OpenShift Serverless Operator and Knative Serving installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Procedure Create a Knative service that contains the label or annotation that you want to propagate to the OpenShift Container Platform route: To create a service by using YAML: Example service created by using YAML apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> labels: <label_name>: <label_value> annotations: <annotation_name>: <annotation_value> ... 
To create a service by using the Knative ( kn ) CLI, enter: Example service created by using a kn command USD kn service create <service_name> \ --image=<image> \ --annotation <annotation_name>=<annotation_value> \ --label <label_value>=<label_value> Verify that the OpenShift Container Platform route has been created with the annotation or label that you added by inspecting the output from the following command: Example command for verification USD oc get routes.route.openshift.io \ -l serving.knative.openshift.io/ingressName=<service_name> \ 1 -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \ 2 -n knative-serving-ingress -o yaml \ | grep -e "<label_name>: \"<label_value>\"" -e "<annotation_name>: <annotation_value>" 3 1 Use the name of your service. 2 Use the namespace where your service was created. 3 Use your values for the label and annotation names and values. 5.4.2. Configuring OpenShift Container Platform routes for Knative services If you want to configure a Knative service to use your TLS certificate on OpenShift Container Platform, you must disable the automatic creation of a route for the service by the OpenShift Serverless Operator and instead manually create a route for the service. Note When you complete the following procedure, the default OpenShift Container Platform route in the knative-serving-ingress namespace is not created. However, the Knative route for the application is still created in this namespace. Prerequisites The OpenShift Serverless Operator and Knative Serving component must be installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Procedure Create a Knative service that includes the serving.knative.openshift.io/disableRoute=true annotation: Important The serving.knative.openshift.io/disableRoute=true annotation instructs OpenShift Serverless to not automatically create a route for you. However, the service still shows a URL and reaches a status of Ready . This URL does not work externally until you create your own route with the same hostname as the hostname in the URL. Create a Knative Service resource: Example resource apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> annotations: serving.knative.openshift.io/disableRoute: "true" spec: template: spec: containers: - image: <image> ... Apply the Service resource: USD oc apply -f <filename> Optional. Create a Knative service by using the kn service create command: Example kn command USD kn service create <service_name> \ --image=gcr.io/knative-samples/helloworld-go \ --annotation serving.knative.openshift.io/disableRoute=true Verify that no OpenShift Container Platform route has been created for the service: Example command USD USD oc get routes.route.openshift.io \ -l serving.knative.openshift.io/ingressName=USDKSERVICE_NAME \ -l serving.knative.openshift.io/ingressNamespace=USDKSERVICE_NAMESPACE \ -n knative-serving-ingress You will see the following output: No resources found in knative-serving-ingress namespace. Create a Route resource in the knative-serving-ingress namespace: apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 600s 1 name: <route_name> 2 namespace: knative-serving-ingress 3 spec: host: <service_host> 4 port: targetPort: http2 to: kind: Service name: kourier weight: 100 tls: insecureEdgeTerminationPolicy: Allow termination: edge 5 key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- wildcardPolicy: None 1 The timeout value for the OpenShift Container Platform route. You must set the same value as the max-revision-timeout-seconds setting ( 600s by default). 2 The name of the OpenShift Container Platform route. 3 The namespace for the OpenShift Container Platform route. This must be knative-serving-ingress . 4 The hostname for external access. You can set this to <service_name>-<service_namespace>.<domain> . 5 The certificates you want to use. Currently, only edge termination is supported. Apply the Route resource: USD oc apply -f <filename> 5.4.3. Setting cluster availability to cluster local By default, Knative services are published to a public IP address. Being published to a public IP address means that Knative services are public applications, and have a publicly accessible URL. Publicly accessible URLs are accessible from outside of the cluster. However, developers may need to build back-end services that are only accessible from inside the cluster, known as private services . Developers can label individual services in the cluster with the networking.knative.dev/visibility=cluster-local label to make them private. Important For OpenShift Serverless 1.15.0 and newer versions, the serving.knative.dev/visibility label is no longer available. You must update existing services to use the networking.knative.dev/visibility label instead. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have created a Knative service. Procedure Set the visibility for your service by adding the networking.knative.dev/visibility=cluster-local label: USD oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local Verification Check that the URL for your service is now in the format http://<service_name>.<namespace>.svc.cluster.local , by entering the following command and reviewing the output: USD oc get ksvc Example output NAME URL LATESTCREATED LATESTREADY READY REASON hello http://hello.default.svc.cluster.local hello-tx2g7 hello-tx2g7 True 5.4.4. Additional resources Route-specific annotations 5.5. Event sinks When you create an event source, you can specify a sink where events are sent to from the source. A sink is an addressable or a callable resource that can receive incoming events from other resources. Knative services, channels, and brokers are all examples of sinks. Addressable objects receive and acknowledge an event delivered over HTTP to an address defined in their status.address.url field. As a special case, the core Kubernetes Service object also fulfills the addressable interface. Callable objects are able to receive an event delivered over HTTP and transform the event, returning 0 or 1 new events in the HTTP response. These returned events may be further processed in the same way that events from an external event source are processed. 5.5.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.
The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . Tip You can configure which CRs can be used with the --sink flag for Knative ( kn ) CLI commands by Customizing kn . 5.5.2. Connect an event source to a sink using the Developer perspective When you create an event source by using the OpenShift Container Platform web console, you can specify a sink where events are sent to from that resource. The sink can be any addressable or callable resource that can receive incoming events from other resources. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Developer perspective. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a sink, such as a Knative service, channel or broker. Procedure Create an event source of any type, by navigating to +Add Event Sources and then selecting the event source type that you want to create. In the Sink section of the Create Event Source form view, select your sink in the Resource list. Click Create . Verification You can verify that the event source was created and is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . View the event source and click on the connected sink to see the sink details in the side panel. 5.5.3. Connecting a trigger to a sink You can connect a trigger to a sink, so that events from a broker are filtered before they are sent to the sink. A sink that is connected to a trigger is configured as a subscriber in the Trigger object's resource spec. Example of a Trigger object connected to a Kafka sink apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: ... subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2 1 The name of the trigger being connected to the sink. 2 The name of a KafkaSink object. 5.6. Event delivery You can configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. Configuring event delivery parameters, including a dead letter sink, ensures that any events that fail to be delivered to an event sink are retried. Otherwise, undelivered events are dropped. 5.6.1. Event delivery behavior patterns for channels and brokers Different channel and broker types have their own behavior patterns that are followed for event delivery. 5.6.1.1. Knative Kafka channels and brokers If an event is successfully delivered to a Kafka channel or broker receiver, the receiver responds with a 202 status code, which means that the event has been safely stored inside a Kafka topic and is not lost. If the receiver responds with any other status code, the event is not safely stored, and steps must be taken by the user to resolve the issue. 5.6.2. 
Configurable event delivery parameters The following parameters can be configured for event delivery: Dead letter sink You can configure the deadLetterSink delivery parameter so that if an event fails to be delivered, it is stored in the specified event sink. Undelivered events that are not stored in a dead letter sink are dropped. The dead letter sink be any addressable object that conforms to the Knative Eventing sink contract, such as a Knative service, a Kubernetes service, or a URI. Retries You can set a minimum number of times that the delivery must be retried before the event is sent to the dead letter sink, by configuring the retry delivery parameter with an integer value. Back off delay You can set the backoffDelay delivery parameter to specify the time delay before an event delivery retry is attempted after a failure. The duration of the backoffDelay parameter is specified using the ISO 8601 format. For example, PT1S specifies a 1 second delay. Back off policy The backoffPolicy delivery parameter can be used to specify the retry back off policy. The policy can be specified as either linear or exponential . When using the linear back off policy, the back off delay is equal to backoffDelay * <numberOfRetries> . When using the exponential backoff policy, the back off delay is equal to backoffDelay*2^<numberOfRetries> . 5.6.3. Examples of configuring event delivery parameters You can configure event delivery parameters for Broker , Trigger , Channel , and Subscription objects. If you configure event delivery parameters for a broker or channel, these parameters are propagated to triggers or subscriptions created for those objects. You can also set event delivery parameters for triggers or subscriptions to override the settings for the broker or channel. Example Broker object apiVersion: eventing.knative.dev/v1 kind: Broker metadata: ... spec: delivery: deadLetterSink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer> ... Example Trigger object apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: broker: <broker_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer> ... Example Channel object apiVersion: messaging.knative.dev/v1 kind: Channel metadata: ... spec: delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer> ... Example Subscription object apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: ... spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: <channel_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer> ... 5.6.4. Configuring event delivery ordering for triggers If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster. Kafka broker is enabled for use on your cluster, and you have created a Kafka broker. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
You have installed the OpenShift ( oc ) CLI. Procedure Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered ... The supported consumer delivery guarantees are: unordered An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management. ordered An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the message of the partition. The default ordering guarantee is unordered . Apply the Trigger object: USD oc apply -f <filename> 5.7. Listing event sources and event source types It is possible to view a list of all event sources or event source types that exist or are available for use on your OpenShift Container Platform cluster. You can use the Knative ( kn ) CLI or the Developer perspective in the OpenShift Container Platform web console to list available event sources or event source types. 5.7.1. Listing available event source types by using the Knative CLI Using the Knative ( kn ) CLI provides a streamlined and intuitive user interface to view available event source types on your cluster. You can list event source types that can be created and used on your cluster by using the kn source list-types CLI command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure List the available event source types in the terminal: USD kn source list-types Example output TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink Optional: You can also list the available event source types in YAML format: USD kn source list-types -o yaml 5.7.2. Viewing available event source types within the Developer perspective It is possible to view a list of all available event source types on your cluster. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to view available event source types. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Access the Developer perspective. Click +Add . Click Event source . View the available event source types. 5.7.3. Listing available event sources by using the Knative CLI Using the Knative ( kn ) CLI provides a streamlined and intuitive user interface to view existing event sources on your cluster. You can list existing event sources by using the kn source list command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. 
Procedure List the existing event sources in the terminal: USD kn source list Example output NAME TYPE RESOURCE SINK READY a1 ApiServerSource apiserversources.sources.knative.dev ksvc:eshow2 True b1 SinkBinding sinkbindings.sources.knative.dev ksvc:eshow3 False p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True Optional: You can list event sources of a specific type only, by using the --type flag: USD kn source list --type <event_source_type> Example command USD kn source list --type PingSource Example output NAME TYPE RESOURCE SINK READY p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True 5.8. Creating an API server source The API server source is an event source that can be used to connect an event sink, such as a Knative service, to the Kubernetes API server. The API server source watches for Kubernetes events and forwards them to the Knative Eventing broker. 5.8.1. Creating an API server source by using the web console After Knative Eventing is installed on your cluster, you can create an API server source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. Apply the YAML file: USD oc apply -f <filename> In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider. Select ApiServerSource and then click Create Event Source . The Create Event Source page is displayed. Configure the ApiServerSource settings by using the Form view or YAML view : Note You can switch between the Form view and YAML view . The data is persisted when switching between the views. Enter v1 as the APIVERSION and Event as the KIND . Select the Service Account Name for the service account that you created. Select the Sink for the event source. A Sink can be either a Resource , such as a channel, broker, or service, or a URI . Click Create . Verification After you have created the API server source, you will see it connected to the service it is sinked to in the Topology view. 
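If you have the OpenShift CLI ( oc ) installed, you can also confirm that the source exists by listing the ApiServerSource objects in your namespace: Example command USD oc get apiserversources.sources.knative.dev -n <namespace>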
Note If a URI sink is used, modify the URI by right-clicking on URI sink Edit URI . Deleting the API server source Navigate to the Topology view. Right-click the API server source and select Delete ApiServerSource . 5.8.2. Creating an API server source by using the Knative CLI You can use the kn source apiserver create command to create an API server source by using the kn CLI. Using the kn CLI to create an API server source provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). You have installed the Knative ( kn ) CLI. Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. Apply the YAML file: USD oc apply -f <filename> Create an API server source that has an event sink. In the following example, the sink is a broker: USD kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource "event:v1" --service-account <service_account_name> --mode Resource To check that the API server source is set up correctly, create a Knative service that dumps incoming messages to its log: USD kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest If you used a broker as an event sink, create a trigger to filter events from the default broker to the service: USD kn trigger create <trigger_name> --sink ksvc:<service_name> Create events by launching a pod in the default namespace: USD oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest Check that the controller is mapped correctly by inspecting the output generated by the following command: USD kn source apiserver describe <source_name> Example output Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m Verification You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs. 
Get the pods: USD oc get pods View the message dumper function logs for the pods: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json ... Data, { "apiVersion": "v1", "involvedObject": { "apiVersion": "v1", "fieldPath": "spec.containers{hello-node}", "kind": "Pod", "name": "hello-node", "namespace": "default", ..... }, "kind": "Event", "message": "Started container", "metadata": { "name": "hello-node.159d7608e3a3572c", "namespace": "default", .... }, "reason": "Started", ... } Deleting the API server source Delete the trigger: USD kn trigger delete <trigger_name> Delete the event source: USD kn source apiserver delete <source_name> Delete the service account, cluster role, and cluster binding: USD oc delete -f authentication.yaml 5.8.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 5.8.3. Creating an API server source by using YAML files Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create an API server source by using YAML, you must create a YAML file that defines an ApiServerSource object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created the default broker in the same namespace as the one defined in the API server source YAML file. Install the OpenShift CLI ( oc ). Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. 
Apply the YAML file: USD oc apply -f <filename> Create an API server source as a YAML file: apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: name: testevents spec: serviceAccountName: events-sa mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default Apply the ApiServerSource YAML file: USD oc apply -f <filename> To check that the API server source is set up correctly, create a Knative service as a YAML file that dumps incoming messages to its log: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest Apply the Service YAML file: USD oc apply -f <filename> Create a Trigger object as a YAML file that filters events from the default broker to the service created in the step: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: event-display-trigger namespace: default spec: broker: default subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display Apply the Trigger YAML file: USD oc apply -f <filename> Create events by launching a pod in the default namespace: USD oc create deployment hello-node --image=quay.io/openshift-knative/knative-eventing-sources-event-display Check that the controller is mapped correctly, by entering the following command and inspecting the output: USD oc get apiserversource.sources.knative.dev testevents -o yaml Example output apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: annotations: creationTimestamp: "2020-04-07T17:24:54Z" generation: 1 name: testevents namespace: default resourceVersion: "62868" selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2 uid: 1603d863-bb06-4d1c-b371-f580b4db99fa spec: mode: Resource resources: - apiVersion: v1 controller: false controllerSelector: apiVersion: "" kind: "" name: "" uid: "" kind: Event labelSelector: {} serviceAccountName: events-sa sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default Verification To verify that the Kubernetes events were sent to Knative, you can look at the message dumper function logs. Get the pods by entering the following command: USD oc get pods View the message dumper function logs for the pods by entering the following command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json ... Data, { "apiVersion": "v1", "involvedObject": { "apiVersion": "v1", "fieldPath": "spec.containers{hello-node}", "kind": "Pod", "name": "hello-node", "namespace": "default", ..... }, "kind": "Event", "message": "Started container", "metadata": { "name": "hello-node.159d7608e3a3572c", "namespace": "default", .... }, "reason": "Started", ... } Deleting the API server source Delete the trigger: USD oc delete -f trigger.yaml Delete the event source: USD oc delete -f k8s-events.yaml Delete the service account, cluster role, and cluster binding: USD oc delete -f authentication.yaml 5.9. Creating a ping source A ping source is an event source that can be used to periodically send ping events with a constant payload to an event consumer. A ping source can be used to schedule sending events, similar to a timer. 5.9.1. 
Creating a ping source by using the web console After Knative Eventing is installed on your cluster, you can create a ping source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the logs of the service. In the Developer perspective, navigate to +Add YAML . Copy the example YAML: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest Click Create . Create a ping source in the same namespace as the service created in the previous step, or any other sink that you want to send events to. In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider. Select Ping Source and then click Create Event Source . The Create Event Source page is displayed. Note You can configure the PingSource settings by using the Form view or YAML view and can switch between the views. The data is persisted when switching between the views. Enter a value for Schedule . In this example, the value is */2 * * * * , which creates a PingSource that sends a message every two minutes. Optional: You can enter a value for Data , which is the message payload. Select a Sink . This can be either a Resource or a URI . In this example, the event-display service created in the previous step is used as the Resource sink. Click Create . Verification You can verify that the ping source was created and is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . View the ping source and sink. Deleting the ping source Navigate to the Topology view. Right-click the ping source and select Delete Ping Source . 5.9.2. Creating a ping source by using the Knative CLI You can use the kn source ping create command to create a ping source by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Optional: If you want to use the verification steps for this procedure, install the OpenShift CLI ( oc ).
Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service logs: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer: USD kn source ping create test-ping-source \ --schedule "*/2 * * * *" \ --data '{"message": "Hello world!"}' \ --sink ksvc:event-display Check that the controller is mapped correctly by entering the following command and inspecting the output: USD kn source ping describe test-ping-source Example output Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {"message": "Hello world!"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the logs of the sink pod. By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a ping source that sends a message every 2 minutes, so each message should be observed in a newly created pod. Watch for new pods created: USD watch oc get pods Cancel watching the pods using Ctrl+C, then look at the logs of the created pod: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { "message": "Hello world!" } Deleting the ping source Delete the ping source: USD kn delete pingsources.sources.knative.dev <ping_source_name> 5.9.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 5.9.3. Creating a ping source by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create a serverless ping source by using YAML, you must create a YAML file that defines a PingSource object, then apply it by using oc apply . 
Example PingSource object apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: "*/2 * * * *" 1 data: '{"message": "Hello world!"}' 2 sink: 3 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 The schedule of the event specified using CRON expression . 2 The event message body expressed as a JSON encoded data string. 3 These are the details of the event consumer. In this example, we are using a Knative service named event-display . Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service's logs. Create a service YAML file: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest Create the service: USD oc apply -f <filename> For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer. Create a YAML file for the ping source: apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: "*/2 * * * *" data: '{"message": "Hello world!"}' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display Create the ping source: USD oc apply -f <filename> Check that the controller is mapped correctly by entering the following command: USD oc get pingsource.sources.knative.dev <ping_source_name> -oyaml Example output apiVersion: sources.knative.dev/v1 kind: PingSource metadata: annotations: sources.knative.dev/creator: developer sources.knative.dev/lastModifier: developer creationTimestamp: "2020-04-07T16:11:14Z" generation: 1 name: test-ping-source namespace: default resourceVersion: "55257" selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164 spec: data: '{ value: "hello" }' schedule: '*/2 * * * *' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the sink pod's logs. By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod. Watch for new pods created: USD watch oc get pods Cancel watching the pods using Ctrl+C, then look at the logs of the created pod: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 042ff529-240e-45ee-b40c-3a908129853e time: 2020-04-07T16:22:00.000791674Z datacontenttype: application/json Data, { "message": "Hello world!" } Deleting the ping source Delete the ping source: USD oc delete -f <filename> Example command USD oc delete -f ping-source.yaml 5.10. 
Custom event sources If you need to ingress events from an event producer that is not included in Knative, or from a producer that emits events which are not in the CloudEvent format, you can do this by creating a custom event source. You can create a custom event source by using one of the following methods: Use a PodSpecable object as an event source, by creating a sink binding. Use a container as an event source, by creating a container source. 5.10.1. Sink binding The SinkBinding object supports decoupling event production from delivery addressing. Sink binding is used to connect event producers to an event consumer, or sink . An event producer is a Kubernetes resource that embeds a PodSpec template and produces events. A sink is an addressable Kubernetes object that can receive events. The SinkBinding object injects environment variables into the PodTemplateSpec of the subject, which is the event producer. This means that the application code does not need to interact directly with the Kubernetes API to locate the event destination. These environment variables are as follows: K_SINK The URL of the resolved sink. K_CE_OVERRIDES A JSON object that specifies overrides to the outbound event. 5.10.1.1. Creating a sink binding by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create a sink binding by using YAML, you must create a YAML file that defines a SinkBinding object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure To check that sink binding is set up correctly, create a Knative event display service, or event sink, that dumps incoming messages to its log. Create a service YAML file: Example service YAML file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest Create the service: USD oc apply -f <filename> Create a sink binding instance that directs events to the service. Create a sink binding YAML file: Example sink binding YAML file apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job 1 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 In this example, any Job with the label app: heartbeat-cron will be bound to the event sink. Create the sink binding: USD oc apply -f <filename> Create a CronJob object.
Create a cron job YAML file: Example cron job YAML file apiVersion: batch/v1beta1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: "* * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace Important To use sink binding, you must manually add a bindings.knative.dev/include=true label to your Knative resources. For example, to add this label to a CronJob resource, add the following lines to the Job resource YAML definition: jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" Create the cron job: USD oc apply -f <filename> Check that the controller is mapped correctly by entering the following command and inspecting the output: USD oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml Example output spec: sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: app: heartbeat-cron Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the message dumper function logs. Enter the command: USD oc get pods Enter the command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } 5.10.1.2. Creating a sink binding by using the Knative CLI You can use the kn source binding create command to create a sink binding by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the Knative ( kn ) CLI. Install the OpenShift CLI ( oc ). Note The following procedure requires you to create YAML files. If you change the names of the YAML files from those used in the examples, you must ensure that you also update the corresponding CLI commands. Procedure To check that sink binding is set up correctly, create a Knative event display service, or event sink, that dumps incoming messages to its log: USD kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest Create a sink binding instance that directs events to the service: USD kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display Create a CronJob object. 
Create a cron job YAML file: Example cron job YAML file apiVersion: batch/v1beta1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: "* * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace Important To use sink binding, you must manually add a bindings.knative.dev/include=true label to your Knative CRs. For example, to add this label to a CronJob CR, add the following lines to the Job CR YAML definition: jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" Create the cron job: USD oc apply -f <filename> Check that the controller is mapped correctly by entering the following command and inspecting the output: USD kn source binding describe bind-heartbeat Example output Name: bind-heartbeat Namespace: demo-2 Annotations: sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub ... Age: 2m Subject: Resource: job (batch/v1) Selector: app: heartbeat-cron Sink: Name: event-display Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the message dumper function logs. View the message dumper function logs by entering the following commands: USD oc get pods USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } 5.10.1.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 5.10.1.3. Creating a sink binding by using the web console After Knative Eventing is installed on your cluster, you can create a sink binding by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service to use as a sink: In the Developer perspective, navigate to +Add YAML . Copy the example YAML: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest Click Create . Create a CronJob resource that is used as an event source and sends an event every minute. In the Developer perspective, navigate to +Add YAML . Copy the example YAML: apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: "*/1 * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: true 1 spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace 1 Ensure that you include the bindings.knative.dev/include: true label. The default namespace selection behavior of OpenShift Serverless uses inclusion mode. Click Create . Create a sink binding in the same namespace as the service created in the step, or any other sink that you want to send events to. In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider. Select Sink Binding and then click Create Event Source . The Create Event Source page is displayed. Note You can configure the Sink Binding settings by using the Form view or YAML view and can switch between the views. The data is persisted when switching between the views. In the apiVersion field enter batch/v1 . In the Kind field enter Job . Note The CronJob kind is not supported directly by OpenShift Serverless sink binding, so the Kind field must target the Job objects created by the cron job, rather than the cron job object itself. Select a Sink . This can be either a Resource or a URI . In this example, the event-display service created in the step is used as the Resource sink. In the Match labels section: Enter app in the Name field. Enter heartbeat-cron in the Value field. Note The label selector is required when using cron jobs with sink binding, rather than the resource name. This is because jobs created by a cron job do not have a predictable name, and contain a randomly generated string in their name. For example, hearthbeat-cron-1cc23f . Click Create . Verification You can verify that the sink binding, sink, and cron job have been created and are working correctly by viewing the Topology page and pod logs. In the Developer perspective, navigate to Topology . View the sink binding, sink, and heartbeats cron job. Observe that successful jobs are being registered by the cron job once the sink binding is added. This means that the sink binding is successfully reconfiguring the jobs created by the cron job. Browse the logs of the event-display service pod to see events produced by the heartbeats cron job. 5.10.1.4. Sink binding reference You can use a PodSpecable object as an event source by creating a sink binding. 
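For orientation, the following minimal sketch shows a complete SinkBinding object that combines a subject and a sink; the subject and sink names reuse the heartbeats example from earlier in this section and are illustrative only:
Example SinkBinding object
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  subject:
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        app: heartbeat-cron
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display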
You can configure multiple parameters when creating a SinkBinding object. SinkBinding objects support the following parameters: Field Description Required or optional apiVersion Specifies the API version, for example sources.knative.dev/v1 . Required kind Identifies this resource object as a SinkBinding object. Required metadata Specifies metadata that uniquely identifies the SinkBinding object. For example, a name . Required spec Specifies the configuration information for this SinkBinding object. Required spec.sink A reference to an object that resolves to a URI to use as the sink. Required spec.subject References the resources for which the runtime contract is augmented by binding implementations. Required spec.ceOverrides Defines overrides to control the output format and modifications to the event sent to the sink. Optional 5.10.1.4.1. Subject parameter The Subject parameter references the resources for which the runtime contract is augmented by binding implementations. You can configure multiple fields for a Subject definition. The Subject definition supports the following fields: Field Description Required or optional apiVersion API version of the referent. Required kind Kind of the referent. Required namespace Namespace of the referent. If omitted, this defaults to the namespace of the object. Optional name Name of the referent. Do not use if you configure selector . selector Selector of the referents. Do not use if you configure name . selector.matchExpressions A list of label selector requirements. Only use one of either matchExpressions or matchLabels . selector.matchExpressions.key The label key that the selector applies to. Required if using matchExpressions . selector.matchExpressions.operator Represents a key's relationship to a set of values. Valid operators are In , NotIn , Exists and DoesNotExist . Required if using matchExpressions . selector.matchExpressions.values An array of string values. If the operator parameter value is In or NotIn , the values array must be non-empty. If the operator parameter value is Exists or DoesNotExist , the values array must be empty. This array is replaced during a strategic merge patch. Required if using matchExpressions . selector.matchLabels A map of key-value pairs. Each key-value pair in the matchLabels map is equivalent to an element of matchExpressions , where the key field is matchLabels.<key> , the operator is In , and the values array contains only matchLabels.<value> . Only use one of either matchExpressions or matchLabels . Subject parameter examples Given the following YAML, the Deployment object named mysubject in the default namespace is selected: apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: apps/v1 kind: Deployment namespace: default name: mysubject ... Given the following YAML, any Job object with the label working=example in the default namespace is selected: apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: working: example ... Given the following YAML, any Pod object with the label working=example or working=sample in the default namespace is selected: apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: v1 kind: Pod namespace: default selector: - matchExpression: key: working operator: In values: - example - sample ... 5.10.1.4.2. 
CloudEvent overrides A ceOverrides definition provides overrides that control the CloudEvent's output format and modifications sent to the sink. You can configure multiple fields for the ceOverrides definition. A ceOverrides definition supports the following fields: Field Description Required or optional extensions Specifies which attributes are added or overridden on the outbound event. Each extensions key-value pair is set independently on the event as an attribute extension. Optional Note Only valid CloudEvent attribute names are allowed as extensions. You cannot set the spec defined attributes from the extensions override configuration. For example, you can not modify the type attribute. CloudEvent Overrides example apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: ... ceOverrides: extensions: extra: this is an extra attribute additional: 42 This sets the K_CE_OVERRIDES environment variable on the subject : Example output { "extensions": { "extra": "this is an extra attribute", "additional": "42" } } 5.10.1.4.3. The include label To use a sink binding, you need to do assign the bindings.knative.dev/include: "true" label to either the resource or the namespace that the resource is included in. If the resource definition does not include the label, a cluster administrator can attach it to the namespace by running: USD oc label namespace <namespace> bindings.knative.dev/include=true 5.10.2. Container source Container sources create a container image that generates events and sends events to a sink. You can use a container source to create a custom event source, by creating a container image and a ContainerSource object that uses your image URI. 5.10.2.1. Guidelines for creating a container image Two environment variables are injected by the container source controller: K_SINK and K_CE_OVERRIDES . These variables are resolved from the sink and ceOverrides spec, respectively. Events are sent to the sink URI specified in the K_SINK environment variable. The message must be sent as a POST using the CloudEvent HTTP format. Example container images The following is an example of a heartbeats container image: package main import ( "context" "encoding/json" "flag" "fmt" "log" "os" "strconv" "time" duckv1 "knative.dev/pkg/apis/duck/v1" cloudevents "github.com/cloudevents/sdk-go/v2" "github.com/kelseyhightower/envconfig" ) type Heartbeat struct { Sequence int `json:"id"` Label string `json:"label"` } var ( eventSource string eventType string sink string label string periodStr string ) func init() { flag.StringVar(&eventSource, "eventSource", "", "the event-source (CloudEvents)") flag.StringVar(&eventType, "eventType", "dev.knative.eventing.samples.heartbeat", "the event-type (CloudEvents)") flag.StringVar(&sink, "sink", "", "the host url to heartbeat to") flag.StringVar(&label, "label", "", "a special label") flag.StringVar(&periodStr, "period", "5", "the number of seconds between heartbeats") } type envConfig struct { // Sink URL where to send heartbeat cloud events Sink string `envconfig:"K_SINK"` // CEOverrides are the CloudEvents overrides to be applied to the outbound event. CEOverrides string `envconfig:"K_CE_OVERRIDES"` // Name of this pod. Name string `envconfig:"POD_NAME" required:"true"` // Namespace this pod exists in. Namespace string `envconfig:"POD_NAMESPACE" required:"true"` // Whether to run continuously or exit. 
OneShot bool `envconfig:"ONE_SHOT" default:"false"` } func main() { flag.Parse() var env envConfig if err := envconfig.Process("", &env); err != nil { log.Printf("[ERROR] Failed to process env var: %s", err) os.Exit(1) } if env.Sink != "" { sink = env.Sink } var ceOverrides *duckv1.CloudEventOverrides if len(env.CEOverrides) > 0 { overrides := duckv1.CloudEventOverrides{} err := json.Unmarshal([]byte(env.CEOverrides), &overrides) if err != nil { log.Printf("[ERROR] Unparseable CloudEvents overrides %s: %v", env.CEOverrides, err) os.Exit(1) } ceOverrides = &overrides } p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink)) if err != nil { log.Fatalf("failed to create http protocol: %s", err.Error()) } c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow()) if err != nil { log.Fatalf("failed to create client: %s", err.Error()) } var period time.Duration if p, err := strconv.Atoi(periodStr); err != nil { period = time.Duration(5) * time.Second } else { period = time.Duration(p) * time.Second } if eventSource == "" { eventSource = fmt.Sprintf("https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s", env.Namespace, env.Name) log.Printf("Heartbeats Source: %s", eventSource) } if len(label) > 0 && label[0] == '"' { label, _ = strconv.Unquote(label) } hb := &Heartbeat{ Sequence: 0, Label: label, } ticker := time.NewTicker(period) for { hb.Sequence++ event := cloudevents.NewEvent("1.0") event.SetType(eventType) event.SetSource(eventSource) event.SetExtension("the", 42) event.SetExtension("heart", "yes") event.SetExtension("beats", true) if ceOverrides != nil && ceOverrides.Extensions != nil { for n, v := range ceOverrides.Extensions { event.SetExtension(n, v) } } if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil { log.Printf("failed to set cloudevents data: %s", err.Error()) } log.Printf("sending cloudevent to %s", sink) if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) { log.Printf("failed to send cloudevent: %v", res) } if env.OneShot { return } // Wait for tick <-ticker.C } } The following is an example of a container source that references the heartbeats container image: apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats name: heartbeats args: - --period=1 env: - name: POD_NAME value: "example-pod" - name: POD_NAMESPACE value: "event-test" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: example-service ... 5.10.2.2. Creating and managing container sources by using the Knative CLI You can use the kn source container commands to create and manage container sources by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. 
Create a container source USD kn source container create <container_source_name> --image <image_uri> --sink <sink> Delete a container source USD kn source container delete <container_source_name> Describe a container source USD kn source container describe <container_source_name> List existing container sources USD kn source container list List existing container sources in YAML format USD kn source container list -o yaml Update a container source This command updates the image URI for an existing container source: USD kn source container update <container_source_name> --image <image_uri> 5.10.2.3. Creating a container source by using the web console After Knative Eventing is installed on your cluster, you can create a container source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Select Container Source and then click Create Event Source . The Create Event Source page is displayed. Configure the Container Source settings by using the Form view or YAML view : Note You can switch between the Form view and YAML view . The data is persisted when switching between the views. In the Image field, enter the URI of the image that you want to run in the container created by the container source. In the Name field, enter the name of the image. Optional: In the Arguments field, enter any arguments to be passed to the container. Optional: In the Environment variables field, add any environment variables to set in the container. In the Sink section, add a sink where events from the container source are routed to. If you are using the Form view, you can choose from the following options: Select Resource to use a channel, broker, or service as a sink for the event source. Select URI to specify where the events from the container source are routed to. After you have finished configuring the container source, click Create . 5.10.2.4. Container source reference You can use a container as an event source, by creating a ContainerSource object. You can configure multiple parameters when creating a ContainerSource object. ContainerSource objects support the following fields: Field Description Required or optional apiVersion Specifies the API version, for example sources.knative.dev/v1 . Required kind Identifies this resource object as a ContainerSource object. Required metadata Specifies metadata that uniquely identifies the ContainerSource object. For example, a name . Required spec Specifies the configuration information for this ContainerSource object. Required spec.sink A reference to an object that resolves to a URI to use as the sink. Required spec.template A template spec for the ContainerSource object. Required spec.ceOverrides Defines overrides to control the output format and modifications to the event sent to the sink. 
Optional Template parameter example apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1 env: - name: POD_NAME value: "mypod" - name: POD_NAMESPACE value: "event-test" ... 5.10.2.4.1. CloudEvent overrides A ceOverrides definition provides overrides that control the CloudEvent's output format and modifications sent to the sink. You can configure multiple fields for the ceOverrides definition. A ceOverrides definition supports the following fields: Field Description Required or optional extensions Specifies which attributes are added or overridden on the outbound event. Each extensions key-value pair is set independently on the event as an attribute extension. Optional Note Only valid CloudEvent attribute names are allowed as extensions. You cannot set the spec defined attributes from the extensions override configuration. For example, you can not modify the type attribute. CloudEvent Overrides example apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: ... ceOverrides: extensions: extra: this is an extra attribute additional: 42 This sets the K_CE_OVERRIDES environment variable on the subject : Example output { "extensions": { "extra": "this is an extra attribute", "additional": "42" } } 5.11. Creating channels Channels are custom resources that define a single event-forwarding and persistence layer. After events have been sent to a channel from an event source or producer, these events can be sent to multiple Knative services or other sinks by using a subscription. You can create channels by instantiating a supported Channel object, and configure re-delivery attempts by modifying the delivery spec in a Subscription object. 5.11.1. Creating a channel by using the web console Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a channel. After Knative Eventing is installed on your cluster, you can create a channel by using the web console. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to +Add Channel . Select the type of Channel object that you want to create in the Type list. Click Create . Verification Confirm that the channel now exists by navigating to the Topology page. 5.11.2. Creating a channel by using the Knative CLI Using the Knative ( kn ) CLI to create channels provides a more streamlined and intuitive user interface than modifying YAML files directly. You can use the kn channel create command to create a channel. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a channel: USD kn channel create <channel_name> --type <channel_type> The channel type is optional, but where specified, must be given in the format Group:Version:Kind . 
For example, you can create an InMemoryChannel object: USD kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel Example output Channel 'mychannel' created in namespace 'default'. Verification To confirm that the channel now exists, list the existing channels and inspect the output: USD kn channel list Example output kn channel list NAME TYPE URL AGE READY REASON mychannel InMemoryChannel http://mychannel-kn-channel.default.svc.cluster.local 93s True Deleting a channel Delete a channel: USD kn channel delete <channel_name> 5.11.3. Creating a default implementation channel by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe channels declaratively and in a reproducible manner. To create a serverless channel by using YAML, you must create a YAML file that defines a Channel object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Channel object as a YAML file: apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default Apply the YAML file: USD oc apply -f <filename> 5.11.4. Creating a Kafka channel by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe channels declaratively and in a reproducible manner. You can create a Knative Eventing channel that is backed by Kafka topics by creating a Kafka channel. To create a Kafka channel by using YAML, you must create a YAML file that defines a KafkaChannel object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a KafkaChannel object as a YAML file: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1 Important Only the v1beta1 version of the API for KafkaChannel objects on OpenShift Serverless is supported. Do not use the v1alpha1 version of this API, as this version is now deprecated. Apply the KafkaChannel YAML file: USD oc apply -f <filename> 5.11.5. steps After you have created a channel, create a subscription that allows event sinks to subscribe to channels and receive events. Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. See Examples of configuring event delivery parameters . 5.12. Creating and managing subscriptions After you have created a channel and an event sink, you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the sink (also known as a subscriber ) to deliver events to. 5.12.1. Creating a subscription by using the web console After you have created a channel and an event sink, you can create a subscription to enable event delivery. 
Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a subscription. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created an event sink, such as a Knative service, and a channel. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to the Topology page. Create a subscription using one of the following methods: Hover over the channel that you want to create a subscription for, and drag the arrow. The Add Subscription option is displayed. Select your sink in the Subscriber list. Click Add . If the service is available in the Topology view under the same namespace or project as the channel, click on the channel that you want to create a subscription for, and drag the arrow directly to a service to immediately create a subscription from the channel to that service. Verification After the subscription has been created, you can see it represented as a line that connects the channel to the service in the Topology view: 5.12.2. Creating a subscription by using YAML After you have created a channel and an event sink, you can create a subscription to enable event delivery. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe subscriptions declaratively and in a reproducible manner. To create a subscription by using YAML, you must create a YAML file that defines a Subscription object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Subscription object: Create a YAML file and copy the following sample code into it: apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 Name of the subscription. 2 Configuration settings for the channel that the subscription connects to. 3 Configuration settings for event delivery. This tells the subscription what happens to events that cannot be delivered to the subscriber. When this is configured, events that failed to be consumed are sent to the deadLetterSink . The event is dropped, no re-delivery of the event is attempted, and an error is logged in the system. The deadLetterSink value must be a Destination . 4 Configuration settings for the subscriber. This is the event sink that events are delivered to from the channel. Apply the YAML file: USD oc apply -f <filename> 5.12.3. Creating a subscription by using the Knative CLI After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the Knative ( kn ) CLI to create subscriptions provides a more streamlined and intuitive user interface than modifying YAML files directly. 
You can use the kn subscription create command with the appropriate flags to create a subscription. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a subscription to connect a sink to a channel: USD kn subscription create <subscription_name> \ --channel <group:version:kind>:<channel_name> \ 1 --sink <sink_prefix>:<sink_name> \ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3 1 --channel specifies the source for cloud events that should be processed. You must provide the channel name. If you are not using the default InMemoryChannel channel that is backed by the Channel custom resource, you must prefix the channel name with the <group:version:kind> for the specified channel type. For example, this will be messaging.knative.dev:v1beta1:KafkaChannel for a Kafka backed channel. 2 --sink specifies the target destination to which the event should be delivered. By default, the <sink_name> is interpreted as a Knative service of this name, in the same namespace as the subscription. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker. 3 Optional: --sink-dead-letter is an optional flag that can be used to specify a sink which events should be sent to in cases where events fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription create mysubscription --channel mychannel --sink ksvc:event-display Example output Subscription 'mysubscription' created in namespace 'default'. Verification To confirm that the channel is connected to the event sink, or subscriber , by a subscription, list the existing subscriptions and inspect the output: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True Deleting a subscription Delete a subscription: USD kn subscription delete <subscription_name> 5.12.4. Describing subscriptions by using the Knative CLI You can use the kn subscription describe command to print information about a subscription in the terminal by using the Knative ( kn ) CLI. Using the Knative CLI to describe subscriptions provides a more streamlined and intuitive user interface than viewing YAML files directly. Prerequisites You have installed the Knative ( kn ) CLI. You have created a subscription in your cluster. Procedure Describe a subscription: USD kn subscription describe <subscription_name> Example output Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min ... Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s 5.12.5. 
Listing subscriptions by using the Knative CLI You can use the kn subscription list command to list existing subscriptions on your cluster by using the Knative ( kn ) CLI. Using the Knative CLI to list subscriptions provides a streamlined and intuitive user interface. Prerequisites You have installed the Knative ( kn ) CLI. Procedure List subscriptions on your cluster: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True 5.12.6. Updating subscriptions by using the Knative CLI You can use the kn subscription update command as well as the appropriate flags to update a subscription from the terminal by using the Knative ( kn ) CLI. Using the Knative CLI to update subscriptions provides a more streamlined and intuitive user interface than updating YAML files directly. Prerequisites You have installed the Knative ( kn ) CLI. You have created a subscription. Procedure Update a subscription: USD kn subscription update <subscription_name> \ --sink <sink_prefix>:<sink_name> \ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2 1 --sink specifies the updated target destination to which the event should be delivered. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker. 2 Optional: --sink-dead-letter is an optional flag that can be used to specify a sink which events should be sent to in cases where events fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription update mysubscription --sink ksvc:event-display 5.12.7. steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. See Examples of configuring event delivery parameters . 5.13. Creating brokers Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments. If a cluster administrator has configured your OpenShift Serverless deployment to use Kafka as the default broker type, creating a broker by using the default settings creates a Kafka-based broker. If your OpenShift Serverless deployment is not configured to use Kafka broker as the default broker type, the channel-based broker is created when you use the default settings in the following procedures. Important Kafka broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 5.13.1. Creating a broker by using the Knative CLI Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Using the Knative ( kn ) CLI to create brokers provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn broker create command to create a broker. 
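The kn broker create command creates a Broker object similar to the following minimal sketch, in which the broker name mybroker is an assumption used only for illustration; the broker class and configuration are filled in from the defaults configured by cluster administrators:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: mybroker
  namespace: default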
Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a broker: USD kn broker create <broker_name> Verification Use the kn command to list all existing brokers: USD kn broker list Example output NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 5.13.2. Creating a broker by annotating a trigger Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create a broker by adding the eventing.knative.dev/injection: enabled annotation to a Trigger object. Important If you create a broker by using the eventing.knative.dev/injection: enabled annotation, you cannot delete this broker without cluster administrator permissions. If you delete the broker without having a cluster administrator remove this annotation first, the broker is created again after deletion. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Trigger object as a YAML file that has the eventing.knative.dev/injection: enabled annotation: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name> 1 Specify details about the event sink, or subscriber , that the trigger sends events to. Apply the Trigger YAML file: USD oc apply -f <filename> Verification You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console. Enter the following oc command to get the broker: USD oc -n <namespace> get broker default Example output NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 5.13.3. Creating a broker by labeling a namespace Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create the default broker automatically by labelling a namespace that you own or have write permissions for. Note Brokers created using this method are not removed if you remove the label. You must manually delete them. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
Procedure Label a namespace with eventing.knative.dev/injection=enabled : USD oc label namespace <namespace> eventing.knative.dev/injection=enabled Verification You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console. Use the oc command to get the broker: USD oc -n <namespace> get broker <broker_name> Example command USD oc -n default get broker default Example output NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 5.13.4. Deleting a broker that was created by injection If you create a broker by injection and later want to delete it, you must delete it manually. Brokers created by using a namespace label or trigger annotation are not deleted permanently if you remove the label or annotation. Prerequisites Install the OpenShift CLI ( oc ). Procedure Remove the eventing.knative.dev/injection=enabled label from the namespace: USD oc label namespace <namespace> eventing.knative.dev/injection- Removing the annotation prevents Knative from recreating the broker after you delete it. Delete the broker from the selected namespace: USD oc -n <namespace> delete broker <broker_name> Verification Use the oc command to get the broker: USD oc -n <namespace> get broker <broker_name> Example command USD oc -n default get broker default Example output No resources found. Error from server (NotFound): brokers.eventing.knative.dev "default" not found 5.13.5. Creating a Kafka broker when it is not configured as the default broker type If your OpenShift Serverless deployment is not configured to use Kafka broker as the default broker type, you can use one of the following procedures to create a Kafka-based broker. Important Kafka broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 5.13.5.1. Creating a Kafka broker by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka broker by using YAML, you must create a YAML file that defines a Broker object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). 
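The Broker object created in the following procedure references a config map named kafka-broker-config in the knative-eventing namespace. Its exact contents depend on how the cluster administrator enabled the Kafka broker functionality, but it typically resembles the following sketch; the bootstrap server address and topic defaults shown here are assumptions used only for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-broker-config
  namespace: knative-eventing
data:
  default.topic.partitions: "10"
  default.topic.replication.factor: "1"
  bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"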
Procedure Create a Kafka-based broker as a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing 1 The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka . 2 The default config map for Knative Kafka brokers. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator. Apply the Kafka-based broker YAML file: USD oc apply -f <filename> 5.13.5.2. Creating a Kafka broker that uses an externally managed Kafka topic If you want to use a Kafka broker without allowing it to create its own internal topic, you can use an externally managed Kafka topic instead. To do this, you must create a Kafka Broker object that uses the kafka.eventing.knative.dev/external.topic annotation. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have access to a Kafka instance such as Red Hat AMQ Streams , and have created a Kafka topic. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Create a Kafka-based broker as a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2 ... 1 The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka . 2 The name of the Kafka topic that you want to use. Apply the Kafka-based broker YAML file: USD oc apply -f <filename> 5.13.6. Managing brokers The Knative ( kn ) CLI provides commands that can be used to describe and list existing brokers. 5.13.6.1. Listing existing brokers by using the Knative CLI Using the Knative ( kn ) CLI to list brokers provides a streamlined and intuitive user interface. You can use the kn broker list command to list existing brokers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure List all existing brokers: USD kn broker list Example output NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True 5.13.6.2. Describing an existing broker by using the Knative CLI Using the Knative ( kn ) CLI to describe brokers provides a streamlined and intuitive user interface. You can use the kn broker describe command to print information about existing brokers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure Describe an existing broker: USD kn broker describe <broker_name> Example command using default broker USD kn broker describe default Example output Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato ... 
Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s 5.13.7. steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. See Examples of configuring event delivery parameters . 5.13.8. Additional resources Configuring the default broker class Triggers Event sources Event delivery Kafka broker Configuring Knative Kafka 5.14. Triggers Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks. See Configuring event delivery ordering for triggers . 5.14.1. Creating a trigger by using the web console Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a broker and a Knative service or other event sink to connect to the trigger. Procedure In the Developer perspective, navigate to the Topology page. Hover over the broker that you want to create a trigger for, and drag the arrow. The Add Trigger option is displayed. Click Add Trigger . Select your sink in the Subscriber list. Click Add . Verification After the subscription has been created, you can view it in the Topology page, where it is represented as a line that connects the broker to the event sink. Deleting a trigger In the Developer perspective, navigate to the Topology page. Click on the trigger that you want to delete. In the Actions context menu, select Delete Trigger . 5.14.2. Creating a trigger by using the Knative CLI Using the Knative ( kn ) CLI to create triggers provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn trigger create command to create a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a trigger: USD kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name> Alternatively, you can create a trigger and simultaneously create the default broker using broker injection: USD kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name> By default, triggers forward all events sent to a broker to sinks that are subscribed to that broker. 
Using the --filter attribute for triggers allows you to filter events from a broker, so that subscribers will only receive a subset of events based on your defined criteria. 5.14.3. Listing triggers by using the Knative CLI Using the Knative ( kn ) CLI to list triggers provides a streamlined and intuitive user interface. You can use the kn trigger list command to list existing triggers in your cluster. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure Print a list of available triggers: USD kn trigger list Example output NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True Optional: Print a list of triggers in JSON format: USD kn trigger list -o json 5.14.4. Describing a trigger by using the Knative CLI Using the Knative ( kn ) CLI to describe triggers provides a streamlined and intuitive user interface. You can use the kn trigger describe command to print information about existing triggers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a trigger. Procedure Enter the command: USD kn trigger describe <trigger_name> Example output Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m 5.14.5. Filtering events with triggers by using the Knative CLI Using the Knative ( kn ) CLI to filter events by using triggers provides a streamlined and intuitive user interface. You can use the kn trigger create command, along with the appropriate flags, to filter events by using triggers. In the following trigger example, only events with the attribute type: dev.knative.samples.helloworld are sent to the event sink: USD kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name> You can also filter events by using multiple attributes. The following example shows how to filter events using the type, source, and extension attributes: USD kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> \ --filter type=dev.knative.samples.helloworld \ --filter source=dev.knative.samples/helloworldsource \ --filter myextension=my-extension-value 5.14.6. Updating a trigger by using the Knative CLI Using the Knative ( kn ) CLI to update triggers provides a streamlined and intuitive user interface. You can use the kn trigger update command with certain flags to update attributes for a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
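For reference, the --filter flags used with the kn trigger create and kn trigger update commands correspond to the spec.filter.attributes field of the Trigger object. The following is a minimal sketch, assuming a broker named default and a Knative service named event-display as the subscriber:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: helloworld-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.samples.helloworld
      source: dev.knative.samples/helloworldsource
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display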
Procedure Update a trigger: USD kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags] You can update a trigger to filter exact event attributes that match incoming events. For example, using the type attribute: USD kn trigger update <trigger_name> --filter type=knative.dev.event You can remove a filter attribute from a trigger. For example, you can remove the filter attribute with key type : USD kn trigger update <trigger_name> --filter type- You can use the --sink parameter to change the event sink of a trigger: USD kn trigger update <trigger_name> --sink ksvc:my-event-sink 5.14.7. Deleting a trigger by using the Knative CLI Using the Knative ( kn ) CLI to delete a trigger provides a streamlined and intuitive user interface. You can use the kn trigger delete command to delete a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Delete a trigger: USD kn trigger delete <trigger_name> Verification List existing triggers: USD kn trigger list Verify that the trigger no longer exists: Example output No triggers found. 5.14.8. Configuring event delivery ordering for triggers If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster. Kafka broker is enabled for use on your cluster, and you have created a Kafka broker. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift ( oc ) CLI. Procedure Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered ... The supported consumer delivery guarantees are: unordered An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management. ordered An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the message of the partition. The default ordering guarantee is unordered . Apply the Trigger object: USD oc apply -f <filename> 5.14.9. steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. See Examples of configuring event delivery parameters . 5.15. Using Knative Kafka Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities. Knative Kafka functionality is available in an OpenShift Serverless installation if a cluster administrator has installed the KnativeKafka custom resource . Note Knative Kafka is not currently supported for IBM Z and IBM Power Systems. Knative Kafka provides additional options, such as: Kafka source Kafka channel Kafka broker (Technology Preview) Kafka sink (Technology Preview) 5.15.1. 
Kafka event delivery and retries Using Kafka components in an event-driven architecture provides "at least once" event delivery. This means that operations are retried until a return code value is received. This makes applications more resilient to lost events; however, it might result in duplicate events being sent. For the Kafka event source, there is a fixed number of retries for event delivery by default. For Kafka channels, retries are only performed if they are configured in the Kafka channel Delivery spec. See the Event delivery documentation for more information about delivery guarantees. 5.15.2. Kafka source You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the OpenShift Container Platform web console, the Knative ( kn ) CLI, or by creating a KafkaSource object directly as a YAML file and using the OpenShift CLI ( oc ) to apply it. 5.15.2.1. Creating a Kafka event source by using the web console After Knative Kafka is installed on your cluster, you can create a Kafka source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a Kafka source. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster. You have logged in to the web console. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to the Add page and select Event Source . In the Event Sources page, select Kafka Source in the Type section. Configure the Kafka Source settings: Add a comma-separated list of Bootstrap Servers . Add a comma-separated list of Topics . Add a Consumer Group . Select the Service Account Name for the service account that you created. Select the Sink for the event source. A Sink can be either a Resource , such as a channel, broker, or service, or a URI . Enter a Name for the Kafka event source. Click Create . Verification You can verify that the Kafka event source was created and is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . View the Kafka event source and sink. 5.15.2.2. Creating a Kafka event source by using the Knative CLI You can use the kn source kafka create command to create a Kafka source by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Eventing, Knative Serving, and the KnativeKafka custom resource (CR) are installed on your cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. You have installed the Knative ( kn ) CLI. Optional: You have installed the OpenShift CLI ( oc ) if you want to use the verification steps in this procedure. 
Procedure To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display Create a KafkaSource CR: USD kn source kafka create <kafka_source_name> \ --servers <cluster_kafka_bootstrap>.kafka.svc:9092 \ --topics <topic_name> --consumergroup my-consumer-group \ --sink event-display Note Replace the placeholder values in this command with values for your source name, bootstrap servers, and topics. The --servers , --topics , and --consumergroup options specify the connection parameters to the Kafka cluster. The --consumergroup option is optional. Optional: View details about the KafkaSource CR you created: USD kn source kafka describe <kafka_source_name> Example output Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h Verification steps Trigger the Kafka instance to send a message to the topic: USD oc -n kafka run kafka-producer \ -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true \ --restart=Never -- bin/kafka-console-producer.sh \ --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic Enter the message in the prompt. This command assumes that: The Kafka cluster is installed in the kafka namespace. The KafkaSource object has been configured to use the my-topic topic. Verify that the message arrived by viewing the logs: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello! 5.15.2.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 5.15.2.3. Creating a Kafka event source by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka source by using YAML, you must create a YAML file that defines a KafkaSource object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster. 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. Install the OpenShift CLI ( oc ). Procedure Create a KafkaSource object as a YAML file: apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: <source_name> spec: consumerGroup: <group_name> 1 bootstrapServers: - <list_of_bootstrap_servers> topics: - <list_of_topics> 2 sink: - <list_of_sinks> 3 1 A consumer group is a group of consumers that use the same group ID, and consume data from a topic. 2 A topic provides a destination for the storage of data. Each topic is split into one or more partitions. 3 A sink specifies where events are sent to from a source. Important Only the v1beta1 version of the API for KafkaSource objects on OpenShift Serverless is supported. Do not use the v1alpha1 version of this API, as this version is now deprecated. Example KafkaSource object apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: kafka-source spec: consumerGroup: knative-group bootstrapServers: - my-cluster-kafka-bootstrap.kafka:9092 topics: - knative-demo-topic sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display Apply the KafkaSource YAML file: USD oc apply -f <filename> Verification Verify that the Kafka event source was created by entering the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 1/1 Running 0 13m 5.15.3. Kafka broker Important Kafka broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . For production-ready Knative Eventing deployments, Red Hat recommends using the Knative Kafka broker implementation. The Kafka broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance. Important The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker. The Kafka broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Kafka broker implementation include: At-least-once delivery guarantees Ordered delivery of events, based on the CloudEvents partitioning extension Control plane high availability A horizontally scalable data plane The Knative Kafka broker stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record. For information about using Kafka brokers, see Creating brokers . 5.15.4. 
Creating a Kafka channel by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe channels declaratively and in a reproducible manner. You can create a Knative Eventing channel that is backed by Kafka topics by creating a Kafka channel. To create a Kafka channel by using YAML, you must create a YAML file that defines a KafkaChannel object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a KafkaChannel object as a YAML file: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1 Important Only the v1beta1 version of the API for KafkaChannel objects on OpenShift Serverless is supported. Do not use the v1alpha1 version of this API, as this version is now deprecated. Apply the KafkaChannel YAML file: USD oc apply -f <filename> 5.15.5. Kafka sink Kafka sinks are a type of event sink that are available if a cluster administrator has enabled Kafka on your cluster. You can send events directly from an event source to a Kafka topic by using a Kafka sink. Important Kafka sink is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 5.15.5.1. Using a Kafka sink You can create an event sink called a Kafka sink that sends events to a Kafka topic. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka sink by using YAML, you must create a YAML file that defines a KafkaSink object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. Install the OpenShift CLI ( oc ). 
Procedure Create a KafkaSink object definition as a YAML file: Kafka sink YAML apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink-name> namespace: <namespace> spec: topic: <topic-name> bootstrapServers: - <bootstrap-server> To create the Kafka sink, apply the KafkaSink YAML file: USD oc apply -f <filename> Configure an event source so that the sink is specified in its spec: Example of a Kafka sink connected to an API server source apiVersion: sources.knative.dev/v1alpha2 kind: ApiServerSource metadata: name: <source-name> 1 namespace: <namespace> 2 spec: serviceAccountName: <service-account-name> 3 mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink-name> 4 1 The name of the event source. 2 The namespace of the event source. 3 The service account for the event source. 4 The Kafka sink name. 5.15.6. Additional resources Red Hat AMQ Streams documentation Red Hat AMQ Streams TLS and SASL on Kafka documentation Event delivery Knative Kafka cluster administrator documentation
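A quick way to check the Kafka sink described above is to consume from its topic with the Kafka console consumer, following the same pattern that this documentation uses for the console producer. This is a minimal sketch and is not part of the original procedure; the kafka namespace, the image tag, the bootstrap server address, and the topic name are assumptions that you must replace with the values used by your own cluster and KafkaSink object:
oc -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-server>:9092 --topic <topic-name> --from-beginning
Each event that an event source sends to the Kafka sink should appear as a record in the consumer output.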
[
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"",
"kn service create <service-name> --image <image> --tag <tag-value>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test",
"Service 'event-display' created in namespace 'test'.",
"tree ./",
"./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file",
"cat test/ksvc/event-display.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}",
"kn service describe event-display --target ./ --namespace test",
"Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON",
"kn service create -f test/ksvc/event-display.yaml",
"Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-delivery namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest env: - name: RESPONSE value: \"Hello Serverless!\"",
"oc apply -f <filename>",
"oc get ksvc <service_name>",
"NAME URL LATESTCREATED LATESTREADY READY REASON event-delivery http://event-delivery-default.example.com event-delivery-4wsd2 event-delivery-4wsd2 True",
"curl http://event-delivery-default.example.com",
"curl https://event-delivery-default.example.com",
"Hello Serverless!",
"curl https://event-delivery-default.example.com --insecure",
"Hello Serverless!",
"curl https://event-delivery-default.example.com --cacert <file>",
"Hello Serverless!",
"spec: ingress: kourier: service-type: LoadBalancer",
"oc -n knative-serving-ingress get svc kourier",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP 67m",
"curl -H \"Host: hello-default.example.com\" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com",
"Hello Serverless!",
"grpc.Dial( \"a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80\", grpc.WithAuthority(\"hello-default.example.com:80\"), grpc.WithInsecure(), )",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: example-namespace spec: podSelector: ingress: []",
"oc label namespace knative-serving knative.openshift.io/system-namespace=true",
"oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true",
"oc label namespace knative-eventing knative.openshift.io/system-namespace=true",
"oc label namespace knative-kafka knative.openshift.io/system-namespace=true",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: <network_policy_name> 1 namespace: <namespace> 2 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/system-namespace: \"true\" podSelector: {} policyTypes: - Ingress",
"apiVersion: serving.knative.dev/v1 kind: Service spec: template: spec: initContainers: - imagePullPolicy: IfNotPresent 1 image: <image_uri> 2 volumeMounts: 3 - name: data mountPath: /data",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-option: \"redirected\" spec:",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: \"0\"",
"kn service create <service_name> --image <image_uri> --scale-min <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: \"10\"",
"kn service create <service_name> --image <image_uri> --scale-max <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: \"200\"",
"kn service create <service_name> --image <image_uri> --concurrency-target <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: containerConcurrency: 50",
"kn service create <service_name> --image <image_uri> --concurrency-limit <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: \"70\"",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - latestRevision: true percent: 100 status: traffic: - percent: 100 revisionName: example-service",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0",
"kn service update <service_name> --tag @latest=example-tag",
"kn service update <service_name> --untag example-tag",
"kn service update <service_name> --traffic <revision>=<percentage>",
"kn service update example-service --traffic @latest=20,stable=80",
"kn service update example-service --traffic @latest=10,stable=60",
"oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'",
"oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}'",
"example-service-00001",
"spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision",
"oc get ksvc <service_name>",
"oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'",
"spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route",
"oc get ksvc <service_name> --output jsonpath=\"{.status.traffic[*].url}\"",
"spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2",
"spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> labels: <label_name>: <label_value> annotations: <annotation_name>: <annotation_value>",
"kn service create <service_name> --image=<image> --annotation <annotation_name>=<annotation_value> --label <label_value>=<label_value>",
"oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=<service_name> \\ 1 -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \\ 2 -n knative-serving-ingress -o yaml | grep -e \"<label_name>: \\\"<label_value>\\\"\" -e \"<annotation_name>: <annotation_value>\" 3",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> annotations: serving.knative.openshift.io/disableRoute: \"true\" spec: template: spec: containers: - image: <image>",
"oc apply -f <filename>",
"kn service create <service_name> --image=gcr.io/knative-samples/helloworld-go --annotation serving.knative.openshift.io/disableRoute=true",
"USD oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=USDKSERVICE_NAME -l serving.knative.openshift.io/ingressNamespace=USDKSERVICE_NAMESPACE -n knative-serving-ingress",
"No resources found in knative-serving-ingress namespace.",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 600s 1 name: <route_name> 2 namespace: knative-serving-ingress 3 spec: host: <service_host> 4 port: targetPort: http2 to: kind: Service name: kourier weight: 100 tls: insecureEdgeTerminationPolicy: Allow termination: edge 5 key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE---- wildcardPolicy: None",
"oc apply -f <filename>",
"oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local",
"oc get ksvc",
"NAME URL LATESTCREATED LATESTREADY READY REASON hello http://hello.default.svc.cluster.local hello-tx2g7 hello-tx2g7 True",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: spec: delivery: deadLetterSink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: broker: <broker_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: spec: delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: <channel_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"kn source list-types",
"TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink",
"kn source list-types -o yaml",
"kn source list",
"NAME TYPE RESOURCE SINK READY a1 ApiServerSource apiserversources.sources.knative.dev ksvc:eshow2 True b1 SinkBinding sinkbindings.sources.knative.dev ksvc:eshow3 False p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"kn source list --type <event_source_type>",
"kn source list --type PingSource",
"NAME TYPE RESOURCE SINK READY p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource",
"kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn trigger create <trigger_name> --sink ksvc:<service_name>",
"oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source apiserver describe <source_name>",
"Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"kn trigger delete <trigger_name>",
"kn source apiserver delete <source_name>",
"oc delete -f authentication.yaml",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: name: testevents spec: serviceAccountName: events-sa mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: event-display-trigger namespace: default spec: broker: default subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc create deployment hello-node --image=quay.io/openshift-knative/knative-eventing-sources-event-display",
"oc get apiserversource.sources.knative.dev testevents -o yaml",
"apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: annotations: creationTimestamp: \"2020-04-07T17:24:54Z\" generation: 1 name: testevents namespace: default resourceVersion: \"62868\" selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2 uid: 1603d863-bb06-4d1c-b371-f580b4db99fa spec: mode: Resource resources: - apiVersion: v1 controller: false controllerSelector: apiVersion: \"\" kind: \"\" name: \"\" uid: \"\" kind: Event labelSelector: {} serviceAccountName: events-sa sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"oc delete -f trigger.yaml",
"oc delete -f k8s-events.yaml",
"oc delete -f authentication.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display",
"kn source ping describe test-ping-source",
"Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"kn delete pingsources.sources.knative.dev <ping_source_name>",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" 1 data: '{\"message\": \"Hello world!\"}' 2 sink: 3 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" data: '{\"message\": \"Hello world!\"}' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc get pingsource.sources.knative.dev <ping_source_name> -oyaml",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: annotations: sources.knative.dev/creator: developer sources.knative.dev/lastModifier: developer creationTimestamp: \"2020-04-07T16:11:14Z\" generation: 1 name: test-ping-source namespace: default resourceVersion: \"55257\" selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164 spec: data: '{ value: \"hello\" }' schedule: '*/2 * * * *' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 042ff529-240e-45ee-b40c-3a908129853e time: 2020-04-07T16:22:00.000791674Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"oc delete -f <filename>",
"oc delete -f ping-source.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job 1 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"apiVersion: batch/v1beta1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"",
"oc apply -f <filename>",
"oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml",
"spec: sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: app: heartbeat-cron",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display",
"apiVersion: batch/v1beta1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"",
"oc apply -f <filename>",
"kn source binding describe bind-heartbeat",
"Name: bind-heartbeat Namespace: demo-2 Annotations: sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub Age: 2m Subject: Resource: job (batch/v1) Selector: app: heartbeat-cron Sink: Name: event-display Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"*/1 * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: true 1 spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: apps/v1 kind: Deployment namespace: default name: mysubject",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: working: example",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: v1 kind: Pod namespace: default selector: - matchExpression: key: working operator: In values: - example - sample",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42",
"{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }",
"oc label namespace <namespace> bindings.knative.dev/include=true",
"package main import ( \"context\" \"encoding/json\" \"flag\" \"fmt\" \"log\" \"os\" \"strconv\" \"time\" duckv1 \"knative.dev/pkg/apis/duck/v1\" cloudevents \"github.com/cloudevents/sdk-go/v2\" \"github.com/kelseyhightower/envconfig\" ) type Heartbeat struct { Sequence int `json:\"id\"` Label string `json:\"label\"` } var ( eventSource string eventType string sink string label string periodStr string ) func init() { flag.StringVar(&eventSource, \"eventSource\", \"\", \"the event-source (CloudEvents)\") flag.StringVar(&eventType, \"eventType\", \"dev.knative.eventing.samples.heartbeat\", \"the event-type (CloudEvents)\") flag.StringVar(&sink, \"sink\", \"\", \"the host url to heartbeat to\") flag.StringVar(&label, \"label\", \"\", \"a special label\") flag.StringVar(&periodStr, \"period\", \"5\", \"the number of seconds between heartbeats\") } type envConfig struct { // Sink URL where to send heartbeat cloud events Sink string `envconfig:\"K_SINK\"` // CEOverrides are the CloudEvents overrides to be applied to the outbound event. CEOverrides string `envconfig:\"K_CE_OVERRIDES\"` // Name of this pod. Name string `envconfig:\"POD_NAME\" required:\"true\"` // Namespace this pod exists in. Namespace string `envconfig:\"POD_NAMESPACE\" required:\"true\"` // Whether to run continuously or exit. OneShot bool `envconfig:\"ONE_SHOT\" default:\"false\"` } func main() { flag.Parse() var env envConfig if err := envconfig.Process(\"\", &env); err != nil { log.Printf(\"[ERROR] Failed to process env var: %s\", err) os.Exit(1) } if env.Sink != \"\" { sink = env.Sink } var ceOverrides *duckv1.CloudEventOverrides if len(env.CEOverrides) > 0 { overrides := duckv1.CloudEventOverrides{} err := json.Unmarshal([]byte(env.CEOverrides), &overrides) if err != nil { log.Printf(\"[ERROR] Unparseable CloudEvents overrides %s: %v\", env.CEOverrides, err) os.Exit(1) } ceOverrides = &overrides } p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink)) if err != nil { log.Fatalf(\"failed to create http protocol: %s\", err.Error()) } c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow()) if err != nil { log.Fatalf(\"failed to create client: %s\", err.Error()) } var period time.Duration if p, err := strconv.Atoi(periodStr); err != nil { period = time.Duration(5) * time.Second } else { period = time.Duration(p) * time.Second } if eventSource == \"\" { eventSource = fmt.Sprintf(\"https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s\", env.Namespace, env.Name) log.Printf(\"Heartbeats Source: %s\", eventSource) } if len(label) > 0 && label[0] == '\"' { label, _ = strconv.Unquote(label) } hb := &Heartbeat{ Sequence: 0, Label: label, } ticker := time.NewTicker(period) for { hb.Sequence++ event := cloudevents.NewEvent(\"1.0\") event.SetType(eventType) event.SetSource(eventSource) event.SetExtension(\"the\", 42) event.SetExtension(\"heart\", \"yes\") event.SetExtension(\"beats\", true) if ceOverrides != nil && ceOverrides.Extensions != nil { for n, v := range ceOverrides.Extensions { event.SetExtension(n, v) } } if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil { log.Printf(\"failed to set cloudevents data: %s\", err.Error()) } log.Printf(\"sending cloudevent to %s\", sink) if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) { log.Printf(\"failed to send cloudevent: %v\", res) } if env.OneShot { return } // Wait for next tick <-ticker.C } }",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"example-pod\" - name: POD_NAMESPACE value: \"event-test\" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: example-service",
"kn source container create <container_source_name> --image <image_uri> --sink <sink>",
"kn source container delete <container_source_name>",
"kn source container describe <container_source_name>",
"kn source container list",
"kn source container list -o yaml",
"kn source container update <container_source_name> --image <image_uri>",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"mypod\" - name: POD_NAMESPACE value: \"event-test\"",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42",
"{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }",
"kn channel create <channel_name> --type <channel_type>",
"kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel",
"Channel 'mychannel' created in namespace 'default'.",
"kn channel list",
"kn channel list NAME TYPE URL AGE READY REASON mychannel InMemoryChannel http://mychannel-kn-channel.default.svc.cluster.local 93s True",
"kn channel delete <channel_name>",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default",
"oc apply -f <filename>",
"apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1",
"oc apply -f <filename>",
"apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"kn subscription create <subscription_name> --channel <group:version:kind>:<channel_name> \\ 1 --sink <sink_prefix>:<sink_name> \\ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3",
"kn subscription create mysubscription --channel mychannel --sink ksvc:event-display",
"Subscription 'mysubscription' created in namespace 'default'.",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription delete <subscription_name>",
"kn subscription describe <subscription_name>",
"Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription update <subscription_name> --sink <sink_prefix>:<sink_name> \\ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2",
"kn subscription update mysubscription --sink ksvc:event-display",
"kn broker create <broker_name>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name>",
"oc apply -f <filename>",
"oc -n <namespace> get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection=enabled",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection-",
"oc -n <namespace> delete broker <broker_name>",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"No resources found. Error from server (NotFound): brokers.eventing.knative.dev \"default\" not found",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2",
"oc apply -f <filename>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"kn broker describe <broker_name>",
"kn broker describe default",
"Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s",
"kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>",
"kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>",
"kn trigger list",
"NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True",
"kn trigger list -o json",
"kn trigger describe <trigger_name>",
"Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m",
"kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>",
"kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> --filter type=dev.knative.samples.helloworld --filter source=dev.knative.samples/helloworldsource --filter myextension=my-extension-value",
"kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]",
"kn trigger update <trigger_name> --filter type=knative.dev.event",
"kn trigger update <trigger_name> --filter type-",
"kn trigger update <trigger_name> --sink ksvc:my-event-sink",
"kn trigger delete <trigger_name>",
"kn trigger list",
"No triggers found.",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display",
"kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display",
"kn source kafka describe <kafka_source_name>",
"Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h",
"oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: <source_name> spec: consumerGroup: <group_name> 1 bootstrapServers: - <list_of_bootstrap_servers> topics: - <list_of_topics> 2 sink: - <list_of_sinks> 3",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: kafka-source spec: consumerGroup: knative-group bootstrapServers: - my-cluster-kafka-bootstrap.kafka:9092 topics: - knative-demo-topic sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 1/1 Running 0 13m",
"apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink-name> namespace: <namespace> spec: topic: <topic-name> bootstrapServers: - <bootstrap-server>",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha2 kind: ApiServerSource metadata: name: <source-name> 1 namespace: <namespace> 2 spec: serviceAccountName: <service-account-name> 3 mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink-name> 4"
]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/serverless/develop
21.4.3. Related Books
21.4.3. Related Books Managing NFS and NIS Services by Hal Stern; O'Reilly & Associates, Inc.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Network_File_System_NFS-Additional_Resources-Related_Books
Chapter 7. Global File System 2
Chapter 7. Global File System 2 The Red Hat Global File System 2 (GFS2) is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs distributed metadata and multiple journals. GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 exabyte file system. However, the current supported maximum size of a GFS2 file system is 100 TB. If a system requires GFS2 file systems larger than 100 TB, contact your Red Hat service representative. When determining the size of a file system, consider its recovery needs. Running the fsck command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of backup media. When configured in a Red Hat Cluster Suite, Red Hat GFS2 nodes can be configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system name space across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way that processes on the same node can share files on a local file system, with no discernible difference. For information about the Red Hat Cluster Suite, refer to Red Hat's Cluster Administration guide. A GFS2 file system must be built on a logical volume (created with LVM) that is a linear or mirrored volume. Logical volumes created with LVM in a Red Hat Cluster Suite are managed with CLVM (a cluster-wide implementation of LVM), enabled by the CLVM daemon clvmd , and running in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the Logical Volume Manager, see Red Hat's Logical Volume Manager Administration guide. The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. For comprehensive information on the creation and configuration of GFS2 file systems in clustered and non-clustered storage, refer to Red Hat's Global File System 2 guide.
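As an illustration of the commands involved (a minimal sketch, not from the original chapter), a GFS2 file system is typically created on the clustered logical volume with the mkfs.gfs2 command and then mounted like any other file system. The cluster name mycluster, the file system name mygfs2, the journal count, the volume path, and the mount point below are assumed placeholder values; in a clustered deployment the cluster infrastructure and clvmd must already be running:
mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/vg01/lv01
mkdir -p /mnt/gfs2
mount /dev/vg01/lv01 /mnt/gfs2
The -j option sets the number of journals, which must be at least equal to the number of nodes that will mount the file system.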
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-gfs2
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/proc_providing-feedback-on-red-hat-documentation_quarkus-maven
Using the AMQ Streams Kafka Bridge
Using the AMQ Streams Kafka Bridge Red Hat AMQ Streams 2.1 Use the AMQ Streams Kafka Bridge to connect with a Kafka cluster
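As an illustration of what the Kafka Bridge provides (a sketch that is not part of this title page), the bridge exposes an HTTP API for producing and consuming Kafka messages. The bridge address my-bridge-bridge-service:8080 and the topic name my-topic below are assumed placeholder values for your own deployment:
curl -X POST http://my-bridge-bridge-service:8080/topics/my-topic -H 'Content-Type: application/vnd.kafka.json.v2+json' -d '{"records":[{"key":"my-key","value":"Hello from the bridge"}]}'
A successful request returns a JSON payload that reports the partition and offset at which each record was stored.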
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_the_amq_streams_kafka_bridge/index
Chapter 14. Red Hat Quay quota management and enforcement overview
Chapter 14. Red Hat Quay quota management and enforcement overview With Red Hat Quay, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Red Hat Quay users are now equipped with the following capabilities to manage the capacity limits of their environment: Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization. Quota management: With this feature, a superuser can define soft and hard checks for Red Hat Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit. Together, these features allow service owners of a Red Hat Quay registry to define service level agreements and support a healthy resource budget. 14.1. Quota management limitations Quota management helps organizations control resource consumption. One limitation of quota management is that calculating resource consumption on push makes the calculation part of the push's critical path. Without calculating consumption at push time, usage data might drift. The maximum storage quota size depends on the selected database: Table 14.1. Maximum storage quota size by database Database Maximum quota size Postgres 8388608 TB MySQL 8388608 TB SQL Server 16777216 TB 14.2. Quota management for Red Hat Quay 3.9 If you are upgrading to Red Hat Quay 3.9, you must reconfigure the quota management feature. This is because with Red Hat Quay 3.9, calculation is done differently. As a result, totals prior to Red Hat Quay 3.9 are no longer valid. There are two methods for configuring quota management in Red Hat Quay 3.9, which are detailed in the following sections. Note This is a one-time calculation that must be done after you have upgraded to Red Hat Quay 3.9. Superuser privileges are required to create, update, and delete quotas. While quotas can be set for users as well as organizations, you cannot reconfigure the user quota using the Red Hat Quay UI and you must use the API instead. 14.2.1. Option A: Configuring quota management for Red Hat Quay 3.9 by adjusting the QUOTA_TOTAL_DELAY feature flag Use the following procedure to recalculate Red Hat Quay 3.9 quota management by adjusting the QUOTA_TOTAL_DELAY feature flag. Note With this recalculation option, the totals appear as 0.00 KB until the allotted time designated for QUOTA_TOTAL_DELAY has passed. Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true 1 The QUOTA_TOTAL_DELAY_SECONDS flag defaults to 1800 seconds, or 30 minutes. This allows Red Hat Quay 3.9 to successfully deploy before the quota management feature begins calculating storage consumption for every blob that has been pushed. Setting this flag to a lower number might result in miscalculation; it must be set to a number that is greater than the time it takes your Red Hat Quay deployment to start. 1800 is the recommended setting; however, larger deployments that take longer than 30 minutes to start might require a longer duration than 1800 .
Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Additionally, the Backfill Queued indicator should be present. After the allotted time, for example, 30 minutes, refresh your Red Hat Quay deployment page and return to your Organization. Now, the Total Quota Consumed should be present. 14.2.2. Option B: Configuring quota management for Red Hat Quay 3.9 by setting QUOTA_TOTAL_DELAY_SECONDS to 0 Use the following procedure to recalculate Red Hat Quay 3.9 quota management by setting QUOTA_TOTAL_DELAY_SECONDS to 0 . Note Using this option prevents the possibility of miscalculations, however it is more time intensive. Use this procedure when your Red Hat Quay deployment changes the FEATURE_QUOTA_MANAGEMENT parameter from false to true . Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Redeploy Red Hat Quay with the QUOTA_BACKFILL flag set to true . For example: QUOTA_BACKFILL: true Note If you choose to disable quota management after it has calculated totals, Red Hat Quay marks those totals as stale. If you re-enable the quota management feature again in the future, those namespaces and repositories are recalculated by the backfill worker. 14.3. Testing quota management for Red Hat Quay 3.9 With quota management configured for Red Hat Quay 3.9, duplicative images are now only counted once towards the repository total. Use the following procedure to test that a duplicative image is only counted once toward the repository total. Prerequisites You have configured quota management for Red Hat Quay 3.9. Procedure Pull a sample image, for example, ubuntu:18.04 , by entering the following command: USD podman pull ubuntu:18.04 Tag the same image twice by entering the following commands: USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1 USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2 Push the sample image to your organization by entering the following commands: USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1 USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2 On the Red Hat Quay UI, navigate to Organization and click the Repository Name , for example, quota-test/ubuntu . Then, click Tags . There should be two repository tags, tag1 and tag2 , each with the same manifest. For example: However, by clicking on the Organization link, you can see that the Total Quota Consumed is 27.94 MB , meaning that the Ubuntu image has only been accounted for once: If you delete one of the Ubuntu tags, the Total Quota Consumed remains the same. Note If you have configured the Red Hat Quay time machine to be longer than 0 seconds, subtraction will not happen until those tags pass the time machine window. If you want to expedite permanent deletion, see Permanently deleting an image tag in Red Hat Quay 3.9. 14.4.
Setting default quota To specify a system-wide default storage quota that is applied to every organization and user, you can use the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES configuration flag. If you configure a specific quota for an organization or user, and then delete that quota, the system-wide default quota will apply if one has been set. Similarly, if you have configured a specific quota for an organization or user, and then modify the system-wide default quota, the updated system-wide default will override any specific settings. For more information about the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES flag, see the Red Hat Quay configuration documentation. 14.5. Establishing quota in Red Hat Quay UI The following procedure describes how you can report storage consumption and establish storage quota limits. Prerequisites A Red Hat Quay registry. A superuser account. Enough storage to meet the demands of quota limitations. Procedure Create a new organization or choose an existing one. Initially, no quota is configured, as can be seen on the Organization Settings tab: Log in to the registry as a superuser and navigate to the Manage Organizations tab on the Super User Admin Panel . Click the Options icon of the organization for which you want to create storage quota limits: Click Configure Quota and enter the initial quota, for example, 10 MB . Then click Apply and Close : Check that the quota consumed shows 0 of 10 MB on the Manage Organizations tab of the superuser panel: The consumed quota information is also available directly on the Organization page: Initial consumed quota To increase the quota to 100 MB, navigate to the Manage Organizations tab on the superuser panel. Click the Options icon and select Configure Quota , setting the quota to 100 MB. Click Apply and then Close : Pull a sample image by entering the following command: USD podman pull ubuntu:18.04 Tag the sample image by entering the following command: USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 Push the sample image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 On the superuser panel, the quota consumed per organization is displayed: The Organization page shows the total proportion of the quota used by the image: Total Quota Consumed for first image Pull a second sample image by entering the following command: USD podman pull nginx Tag the second image by entering the following command: USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx Push the second image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx The Organization page shows the total proportion of the quota used by each repository in that organization: Total Quota Consumed for each repository Create reject and warning limits: From the superuser panel, navigate to the Manage Organizations tab. Click the Options icon for the organization and select Configure Quota . In the Quota Policy section, with the Action type set to Reject , set the Quota Threshold to 80 and click Add Limit : To create a warning limit, select Warning as the Action type, set the Quota Threshold to 70 and click Add Limit : Click Close on the quota popup.
The limits are viewable, but not editable, on the Settings tab of the Organization page: Push an image where the reject limit is exceeded: Because the reject limit (80%) has been set to below the current repository size (~83%), the pushed image is rejected automatically. Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace When limits are exceeded, notifications are displayed in the UI: Quota notifications 14.6. Establishing quota with the Red Hat Quay API When an organization is first created, it does not have a quota applied. Use the /api/v1/organization/{organization}/quota endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output [] 14.6.1. 
Setting the quota To set a quota for an organization, POST data to the /api/v1/organization/{orgname}/quota endpoint: Sample command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq Sample output "Created" 14.6.2. Viewing the quota To see the applied quota, GET data from the /api/v1/organization/{orgname}/quota endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output [ { "id": 1, "limit_bytes": 10485760, "default_config": false, "limits": [], "default_config_exists": false } ] 14.6.3. Modifying the quota To change the existing quota, in this instance from 10 MB to 100 MB, PUT data to the /api/v1/organization/{orgname}/quota/{quota_id} endpoint: Sample command USD curl -k -X PUT -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq Sample output { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [], "default_config_exists": false } 14.6.4. Pushing images To see the storage consumed, push various images to the organization. 14.6.4.1. Pushing ubuntu:18.04 Push ubuntu:18.04 to the organization from the command line: Sample commands USD podman pull ubuntu:18.04 USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 14.6.4.2. Using the API to view quota usage To view the storage consumed, GET data from the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false } ] } 14.6.4.3.
Pushing another image Pull, tag, and push a second image, for example, nginx : Sample commands USD podman pull nginx USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] } To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq Sample output { "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } } 14.6.5. Rejecting pushes using quota limits If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, or warning , users are notified. For a hard check, or reject , the push is terminated. 14.6.5.1. Setting reject and warning limits To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint: Sample reject limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit Sample warning limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit 14.6.5.2. Viewing reject and warning limits To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint: View quota limits USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output for quota limits [ { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [ { "id": 2, "type": "Warning", "limit_percent": 50 }, { "id": 1, "type": "Reject", "limit_percent": 80 } ], "default_config_exists": false } ] 14.6.5.3. Pushing an image when the reject limit is exceeded In this example, the reject limit (80%) has been set to below the current repository size (~83%), so the push should automatically be rejected. 
Push a sample image to the organization from the command line: Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace 14.6.5.4. Notifications for limits exceeded When limits are exceeded, a notification appears: Quota notifications 14.7. Calculating the total registry size in Red Hat Quay 3.9 Use the following procedure to queue a registry total calculation. Note This feature is done on-demand, and calculating a registry total is database intensive. Use with caution. Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged in as a Red Hat Quay superuser. Procedure On the Red Hat Quay UI, click your username Super User Admin Panel . In the navigation pane, click Manage Organizations . Click Calculate , next to Total Registry Size: 0.00 KB, Updated: Never , Calculation required . Then, click Ok . After a few minutes, depending on the size of your registry, refresh the page. Now, the Total Registry Size should be calculated. For example: 14.8. Permanently deleting an image tag In some cases, users might want to delete an image tag outside of the time machine window. Use the following procedure to manually delete an image tag permanently.
Important The results of the following procedure cannot be undone. Use with caution. 14.8.1. Permanently deleting an image tag using the Red Hat Quay v2 UI Use the following procedure to permanently delete an image tag using the Red Hat Quay v2 UI. Prerequisites You have set FEATURE_UI_V2 to true in your config.yaml file. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true In the navigation pane, click Repositories . Click the name of the repository, for example, quayadmin/busybox . Check the box of the image tag that will be deleted, for example, test . Click Actions Permanently Delete . Important This action is permanent and cannot be undone. 14.8.2. Permanently deleting an image tag using the Red Hat Quay legacy UI Use the following procedure to permanently delete an image tag using the Red Hat Quay legacy UI. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true On the Red Hat Quay UI, click Repositories and the name of the repository that contains the image tag you will delete, for example, quayadmin/busybox . In the navigation pane, click Tags . Check the box of the name of the tag you want to delete, for example, test . Click the Actions drop down menu and select Delete Tags Delete Tag . Click Tag History in the navigation pane. On the name of the tag that was just deleted, for example, test , click Delete test under the Permanently Delete category. For example: Permanently delete image tag Important This action is permanent and cannot be undone.
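For reference, the following is a minimal config.yaml sketch that combines the quota management flags discussed in this chapter with the system-wide default quota described in "Setting default quota". This is a sketch rather than a definitive configuration: it assumes that DEFAULT_SYSTEM_REJECT_QUOTA_BYTES accepts a size in bytes, and the 10 GB value (10737418240 bytes) is illustrative only.
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_GARBAGE_COLLECTION: true
PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
# Assumed to take a byte value; 10737418240 bytes is 10 GB and is illustrative only
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 10737418240
With such a setting in place, organizations and users that do not have an explicit quota configured fall back to this system-wide default.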
|
[
"FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true",
"FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true",
"QUOTA_BACKFILL: true",
"podman pull ubuntu:18.04",
"podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1",
"podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2",
"podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1",
"podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2",
"podman pull ubuntu:18.04",
"podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"podman pull nginx",
"podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[]",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq",
"\"Created\"",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 10485760, \"default_config\": false, \"limits\": [], \"default_config_exists\": false } ]",
"curl -k -X PUT -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq",
"{ \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [], \"default_config_exists\": false }",
"podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }",
"podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true'",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq",
"{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true",
"PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/red-hat-quay-quota-management-and-enforcement
|
Chapter 11. Generating a diagnostic bundle
|
Chapter 11. Generating a diagnostic bundle You can generate a diagnostic bundle and send that data to enable the support team to provide insights into the status and health of Red Hat Advanced Cluster Security for Kubernetes components. Red Hat might request you to send the diagnostic bundle during investigation of your issues with Red Hat Advanced Cluster Security for Kubernetes. You can generate a diagnostic bundle and inspect its data before sending. Note The diagnostic bundle is unencrypted, and depending upon the number of clusters in your environment, the bundle size is between 100 KB and 1 MB. Always use an encrypted channel to transfer this data back to Red Hat. 11.1. Diagnostic bundle data When you generate a diagnostic bundle, it includes the following data: Central heap profile. System logs: Logs of all Red Hat Advanced Cluster Security for Kubernetes components (for the last 20 minutes) and logs of recently crashed components (from up to 20 minutes before the crash). System logs depend on the size of your environment. For large deployments, data includes log files for components with critical errors only, such as a high restart count. YAML definitions for Red Hat Advanced Cluster Security for Kubernetes components: This data does not include Kubernetes secrets. OpenShift Container Platform or Kubernetes events: Details about the events that relate to the objects in the stackrox namespace. Online Telemetry data, which includes: Storage information: Details about the database size and the amount of free space available in attached volumes. Red Hat Advanced Cluster Security for Kubernetes components health information: Details about Red Hat Advanced Cluster Security for Kubernetes components versions, their memory usage, and any reported errors. Coarse-grained usage statistics: Details about API endpoint invocation counts and reported error statuses. It does not include the actual data sent in API requests. Nodes information: Details about the nodes in each secured cluster. It includes kernel and operating system versions, resource pressure, and taints. Environment information: Details about each secured cluster, including Kubernetes or OpenShift Container Platform version, Istio version (if applicable), cloud provider type and other similar information. 11.2. Generating a diagnostic bundle by using the RHACS portal You can generate a diagnostic bundle by using the system health dashboard in the RHACS portal. Prerequisites To generate a diagnostic bundle, you need read permission for the Administration resource. Procedure In the RHACS portal, select Platform Configuration System Health . On the System Health view header, click Generate Diagnostic Bundle . For the Filter by clusters drop-down menu, select the clusters for which you want to generate the diagnostic data. For Filter by starting time , specify the date and time (in UTC format) from which you want to include the diagnostic data. Click Download Diagnostic Bundle . 11.3. Generating a diagnostic bundle by using the roxctl CLI You can generate a diagnostic bundle with the Red Hat Advanced Cluster Security for Kubernetes (RHACS) administrator password or API token and central address by using the roxctl CLI. Prerequisites To generate a diagnostic bundle, you need read permission for the Administration resource. You must have configured the RHACS administrator password or API token and central address. 
Procedure To generate a diagnostic bundle by using the RHACS administrator password, perform the following steps: Run the following command to configure the ROX_PASSWORD and ROX_CENTRAL_ADDRESS environment variables: USD export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1 1 For <rox_password> , specify the RHACS administrator password. Run the following command to generate a diagnostic bundle by using the RHACS administrator password: USD roxctl -e "USDROX_CENTRAL_ADDRESS" -p "USDROX_PASSWORD" central debug download-diagnostics To generate a diagnostic bundle by using the API token, perform the following steps: Run the following command to configure the ROX_API_TOKEN environment variable: USD export ROX_API_TOKEN= <api_token> Run the following command to generate a diagnostic bundle by using the API token: USD roxctl -e "USDROX_CENTRAL_ADDRESS" central debug download-diagnostics
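Because the diagnostic bundle is unencrypted, you might want to inspect its contents before sending it to Red Hat. The following is a minimal sketch that assumes the bundle was downloaded into the current directory as a .zip archive; the file name shown is a placeholder, so substitute the name of the bundle that roxctl or the RHACS portal actually produced.
USD unzip -l <diagnostic_bundle>.zip    # list the files in the bundle without extracting them
USD unzip <diagnostic_bundle>.zip -d ./diagnostic-bundle-review    # extract the bundle for review before transferring it over an encrypted channel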
|
[
"export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" -p \"USDROX_PASSWORD\" central debug download-diagnostics",
"export ROX_API_TOKEN= <api_token>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug download-diagnostics"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/generate-diagnostic-bundle
|
Chapter 5. Managing Kickstart and Configuration Files Using authconfig
|
Chapter 5. Managing Kickstart and Configuration Files Using authconfig The --update option updates all of the configuration files with the configuration changes. There are a couple of alternative options with slightly different behavior: --kickstart writes the updated configuration to a kickstart file. --test displays the full configuration with changes, but does not edit any configuration files. Additionally, authconfig can be used to back up and restore configurations. All archives are saved to a unique subdirectory in the /var/lib/authconfig/ directory. For example, the --savebackup option gives the backup directory as 2011-07-01 : This backs up all of the authentication configuration files beneath the /var/lib/authconfig/backup-2011-07-01 directory. Any of the saved backups can be used to restore the configuration using the --restorebackup option, giving the name of the manually saved configuration: Additionally, authconfig automatically makes a backup of the configuration before it applies any changes (with the --update option). The configuration can be restored from the most recent automatic backup, without having to specify the exact backup, using the --restorelastbackup option.
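As a sketch of how these options fit together, the following commands preview an LDAP configuration change without writing any files, and then roll the system back to the most recent automatic backup. The LDAP server and base DN values are placeholders for illustration only.
# Display the full resulting configuration without editing any configuration files
authconfig --enableldap --enableldapauth --ldapserver=ldap://ldap.example.com --ldapbasedn="dc=example,dc=com" --test
# Restore the configuration that was backed up automatically before the last --update run
authconfig --restorelastbackup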
|
[
"authconfig --savebackup=2011-07-01",
"authconfig --restorebackup=2011-07-01"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/authconfig-kickstart-cmd
|
Chapter 15. Scaling the Ceph Storage cluster
|
Chapter 15. Scaling the Ceph Storage cluster You can scale the size of your Ceph Storage cluster by adding or removing storage nodes. 15.1. Scaling up the Ceph Storage cluster As capacity and performance requirements change, you can scale up your Ceph Storage cluster to meet increased demands. Before doing so, ensure that you have enough nodes for the updated deployment. Then you can register and tag the new nodes in your Red Hat OpenStack Platform (RHOSP) environment. This procedure results in the following actions: The storage networks and firewall rules are configured on the new CephStorage nodes. The ceph-admin user is created on the new CephStorage nodes. The ceph-admin user public SSH key is distributed to the new CephStorage nodes so that cephadm can use SSH to add extra nodes. If a new CephMon or CephMgr node is added, the ceph-admin private SSH key is also distributed to that node. The updated Ceph specification is applied and cephadm schedules the new nodes to join the Ceph cluster. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Modify the ~/overcloud-baremetal-deploy.yaml file to add the CephStorage nodes to the deployment. The first example file represents an original deployment with three CephStorage nodes. The second example modifies this file to add three additional nodes. Use the openstack overcloud node provision command with the updated ~/overcloud-baremetal-deploy.yaml file. Note This command will provision the configured nodes and output an updated copy of ~/overcloud-baremetal-deployed.yaml . The new version updates the CephStorage role. The DeployedServerPortMap and HostnameMap also contain the new storage nodes. Use the openstack overcloud ceph spec command to generate a Ceph specification file. Note The files used with the openstack overcloud ceph spec command should already be available for use. They are created in the following locations: The overcloud-baremetal-deployed.yaml file was created in the previous step of this procedure. The osd_spec.yaml file was created in Configuring advanced OSD specifications . Providing the OSD specification with the --osd-spec parameter is optional. The roles_data.yaml file was created in Designating nodes for Red Hat Ceph Storage . It is assumed the new nodes are assigned to one of the roles in this file. The output of this command will be the ceph_spec.yaml file. Use the openstack overcloud ceph user enable command to create the ceph-admin user on all nodes in the cluster. The ceph-admin user must be present on all nodes to enable SSH access to a node by the Ceph orchestrator. Note Use the ceph_spec.yaml file created in the previous step. Copy the ceph_spec.yaml file to the controller-0 node. Log in to the controller-0 node. Mount the ceph_spec.yaml file : Replace <spec_file_path> with the fully qualified path and file name of the ceph_spec.yaml file. Use the orchestrator to apply ceph_spec.yaml : Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Optional: Use the openstack overcloud deploy command with the updated ~/overcloud-baremetal-deployed.yaml file. Note This step is only necessary if you are scaling out a ComputeHCI node. It is not necessary if you are scaling out a CephStorage node. 15.2. Scaling down and replacing Red Hat Ceph Storage nodes In some cases, you might need to scale down your Red Hat Ceph Storage cluster or replace a Red Hat Ceph Storage node.
In either situation, you must disable and rebalance the Red Hat Ceph Storage nodes that you want to remove from the overcloud to prevent data loss. Procedure Do not proceed with this procedure if the Red Hat Ceph Storage cluster does not have the capacity to lose OSDs. Log in to the overcloud Controller node as the tripleo-admin user. Use the sudo cephadm shell command to start a Ceph shell. Use the ceph osd tree command to identify OSDs to be removed by server. In the following example, we identify the OSDs of the ceph-2 host. Exit the cephadm shell. Export the Ceph cluster specification to a YAML file. Edit the specification file exported in the previous step. Remove all occurrences of the scaled-down node from the placement:hosts section of the spec.yaml file (see the sketch at the end of this section). Save the edited file. Apply the modified Ceph specification file. Important If you do not export and edit the Ceph specification file before removing the OSDs, the Ceph Manager will attempt to recreate the OSDs. Use the sudo cephadm shell command to start a Ceph shell. Use the command ceph orch osd rm --zap <osd_list> to remove the OSDs. Use the command ceph orch osd status to check the status of OSD removal. Warning Do not proceed to the next step until this command returns no results. Use the command ceph orch host drain <HOST> to drain any remaining daemons. Use the command ceph orch host rm <HOST> to remove the host. Note This node is no longer used by the Ceph cluster but is still managed by director as a bare-metal node. End the Ceph shell session. Note If scaling down the Ceph cluster is temporary and the nodes removed will be restored later, the scaling up action can increment the count and set provisioned: true on nodes that were previously set provisioned: false . If the node will never be reused, it can remain set to provisioned: false indefinitely, and the scaling up action can specify a new instances entry. The following file sample provides an example of each case. To remove the node from director, see Scaling down bare-metal nodes in Installing and managing Red Hat OpenStack Platform with director .
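As an illustration of the spec.yaml edit described in this procedure, the following sketch shows what the placement section of an exported OSD service specification might look like after the ceph-2 host has been removed. The service_id and data_devices values are illustrative assumptions; your exported specification contains the values generated for your cluster.
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    # ceph-2 has been removed from this list as part of the scale-down
    - ceph-0
    - ceph-1
spec:
  data_devices:
    all: true
With ceph-2 removed from placement:hosts , reapplying the specification prevents the Ceph Manager from attempting to recreate OSDs on the host you are about to drain and remove.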
|
[
"source ~/stackrc",
"- name: CephStorage count: 3 instances: - hostname: ceph-0 name: ceph-0 - hostname: ceph-1 name: ceph-2 - hostname: ceph-2 name: ceph-2",
"- name: CephStorage count: 6 instances: - hostname: ceph-0 name: ceph-0 - hostname: ceph-1 name: ceph-2 - hostname: ceph-2 name: ceph-2 - hostname: ceph-3 name: ceph-3 - hostname: ceph-4 name: ceph-4 - hostname: ceph-5 name: ceph-5",
"openstack overcloud node provision --stack overcloud --network-config --output ~/overcloud-baremetal-deployed.yaml ~/overcloud-baremetal-deploy.yaml",
"openstack overcloud ceph spec ~/overcloud-baremetal-deployed.yaml --osd-spec osd_spec.yaml --roles-data roles_data.yaml -o ceph_spec.yaml",
"openstack overcloud ceph user enable ceph_spec.yaml",
"cephadm shell -m <spec_file_path>",
"ceph orch apply -i /mnt/ceph_spec.yaml",
"source ~/stackrc",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e deployed_ceph.yaml -e overcloud-baremetal-deploy.yaml",
"ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.58557 root default -7 0.19519 host ceph-2 5 hdd 0.04880 osd.5 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000",
"sudo cephadm shell -- ceph orch ls --export > spec.yaml",
"sudo cephadm shell -m spec.yaml -- ceph orch apply -i /mnt/spec.yaml",
"ceph orch osd rm --zap 5 7 9 11 Scheduled OSD(s) for removal ceph orch osd rm status OSD_ID HOST STATE PG_COUNT REPLACE FORCE DRAIN_STARTED_AT 7 ceph-2 draining 27 False False 2021-04-23 21:35:51.215361 9 ceph-2 draining 8 False False 2021-04-23 21:35:49.111500 11 ceph-2 draining 14 False False 2021-04-23 21:35:50.243762",
"ceph orch osd rm status OSD_ID HOST STATE PG_COUNT REPLACE FORCE DRAIN_STARTED_AT 7 ceph-2 draining 34 False False 2021-04-23 21:35:51.215361 11 ceph-2 draining 14 False False 2021-04-23 21:35:50.243762",
"ceph orch host drain ceph-2",
"ceph orch host rm ceph-2",
"- name: Compute count: 2 instances: - hostname: overcloud-compute-0 name: node10 # Removed from deployment due to disk failure provisioned: false - hostname: overcloud-compute-1 name: node11 - hostname: overcloud-compute-2 name: node12"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_scaling-the-ceph-storage-cluster_deployingcontainerizedrhcs
|
Chapter 2. Prerequisites
|
Chapter 2. Prerequisites Installer-provisioned installation of OpenShift Container Platform requires: One provisioner node with Red Hat Enterprise Linux (RHEL) 9.x installed. The provisioner can be removed after installation. Three control plane nodes Baseboard management controller (BMC) access to each node At least one network: One required routable network One optional provisioning network One optional management network Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements. 2.1. Node requirements Installer-provisioned installation involves a number of hardware node requirements: CPU architecture: All nodes must use x86_64 or aarch64 CPU architecture. Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration. Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol. Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 9.x ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 9.x for the provisioner node and RHCOS 9.x for the control plane and worker nodes. Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node. Provisioner node: Installer-provisioned installation requires one provisioner node. Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing. Worker nodes: While not required, a typical production cluster has two or more worker nodes. Important Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state. Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Note Only one network card (NIC) on the same subnet can route traffic through the gateway. By default, Address Resolution Protocol (ARP) uses the lowest numbered NIC. Use a single NIC for each node in the same subnet to ensure that network load balancing works as expected. When using multiple NICs for a node in the same subnet, use a single bond or team interface. Then add the other IP addresses to that interface in the form of an alias IP address. If you require fault tolerance or load balancing at the network interface level, use an alias IP address on the bond or team interface. Alternatively, you can disable a secondary NIC on the same subnet or ensure that it has no IP address. 
Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement. Important When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail. Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed. Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details. Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the bootMode value to UEFISecureBoot in the install-config.yaml file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version 2.75.75.75 or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details. Note Red Hat does not support managing self-generated keys, or other keys, for Secure Boot. 2.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.1. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHEL 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 2.3. 
Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 2.4. Firmware requirements for installing with virtual media The installation program for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The installation program does not begin installation on a node if the node firmware is not compatible. The following tables list the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. Note Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy . For information about updating the firmware, see the hardware documentation for the nodes or contact the hardware vendor. Table 2.2. Firmware compatibility for HP hardware with Redfish virtual media Model Management Firmware versions 11th Generation iLO6 1.57 or later 10th Generation iLO5 2.63 or later Table 2.3. Firmware compatibility for Dell hardware with Redfish virtual media Model Management Firmware versions 16th Generation iDRAC 9 v7.10.70.00 15th Generation iDRAC 9 v6.10.30.00 and v7.10.70.00 14th Generation iDRAC 9 v6.10.30.00 Table 2.4. Firmware compatibility for Cisco UCS hardware with Redfish virtual media Model Management Firmware versions UCS UCSX-210C-M6 CIMC 5.2(2) or later Additional resources Unable to discover new bare metal hosts using the BMC 2.5. Network requirements Installer-provisioned installation of OpenShift Container Platform involves multiple network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare-metal node. Second, installer-provisioned installation involves a routable baremetal network. 2.5.1. Ensuring required ports are open Certain ports must be open between cluster nodes for installer-provisioned installations to complete successfully. In certain situations, such as using separate subnets for far edge worker nodes, you must ensure that the nodes in these subnets can communicate with nodes in the other subnets on the following required ports. Table 2.5. 
Required ports Port Description 67 , 68 When using a provisioning network, cluster nodes access the dnsmasq DHCP server over their provisioning network interfaces using ports 67 and 68 . 69 When using a provisioning network, cluster nodes communicate with the TFTP server on port 69 using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node. 80 When not using the image caching option or when using virtual media, the provisioner node must have port 80 open on the baremetal machine network interface to stream the Red Hat Enterprise Linux CoreOS (RHCOS) image from the provisioner node to the cluster nodes. 123 The cluster nodes must access the NTP server on port 123 using the baremetal machine network. 5050 The Ironic Inspector API runs on the control plane nodes and listens on port 5050 . The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare-metal nodes. 5051 Port 5050 uses port 5051 as a proxy. 6180 When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port 6180 open on the baremetal machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the RHCOS image. Starting with OpenShift Container Platform 4.13, the default HTTP port is 6180 . 6183 When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port 6183 open on the baremetal machine network interface so that the BMC of the worker nodes can access the RHCOS image. 6385 The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port 6385 . The Ironic API allows clients to interact with Ironic for bare-metal node provisioning and management, including operations such as enrolling new nodes, managing their power state, deploying images, and cleaning the hardware. 6388 Port 6385 uses port 6388 as a proxy. 8080 When using image caching without TLS, port 8080 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 8083 When using the image caching option with TLS, port 8083 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 9999 By default, the Ironic Python Agent (IPA) listens on TCP port 9999 for API calls from the Ironic conductor service. Communication between the bare-metal node where IPA is running and the Ironic conductor service uses this port. 2.5.2. Increase the network MTU Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation. 2.5.3. Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot. 
The provisioningNetworkInterface configuration setting specifies the provisioning network NIC name on the control plane nodes, which must be identical on the control plane nodes. The bootMACAddress configuration setting provides a means to specify a particular NIC on each node for the provisioning network. The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . baremetal : The baremetal network is a routable network. You can use any NIC to interface with the baremetal network provided the NIC is not configured to use the provisioning network. Important When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network. 2.5.4. DNS requirements Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name. <cluster_name>.<base_domain> For example: test-cluster.example.com OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. CoreDNS requires both TCP and UDP connections to the upstream DNS server to function correctly. Ensure the upstream DNS server can receive both TCP and UDP connections from OpenShift Container Platform cluster nodes. In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard ingress API A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes. Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Routes *.apps.<cluster_name>.<base_domain>. The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Tip You can use the dig command to verify DNS resolution. 2.5.5. 
Dynamic Host Configuration Protocol (DHCP) requirements By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed , which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file. Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server. 2.5.6. Reserving IP addresses for nodes with the DHCP server For the baremetal network, a network administrator must reserve several IP addresses, including: Two unique virtual IP addresses. One virtual IP address for the API endpoint. One virtual IP address for the wildcard ingress endpoint. One IP address for the provisioner node. One IP address for each control plane node. One IP address for each worker node, if applicable. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring node network interfaces" in the "Setting up the environment for an OpenShift installation" section. Networking between external load balancers and control plane nodes External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Important The storage interface requires a DHCP reservation or a static IP. The following table provides an exemplary embodiment of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<base_domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<base_domain> <ip> Provisioner node provisioner.<cluster_name>.<base_domain> <ip> Control-plane-0 openshift-control-plane-0.<cluster_name>.<base_domain> <ip> Control-plane-1 openshift-control-plane-1.<cluster_name>-.<base_domain> <ip> Control-plane-2 openshift-control-plane-2.<cluster_name>.<base_domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<base_domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<base_domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<base_domain> <ip> Note If you do not create DHCP reservations, the installation program requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes. 2.5.7. Provisioner node requirements You must specify the MAC address for the provisioner node in your installation configuration. The bootMacAddress specification is typically associated with PXE network booting. However, the Ironic provisioning service also requires the bootMacAddress specification to identify nodes during the inspection of the cluster, or during node redeployment in the cluster. The provisioner node requires layer 2 connectivity for network booting, DHCP and DNS resolution, and local network communication. The provisioner node requires layer 3 connectivity for virtual media booting. 2.5.8. 
Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL/TLS certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes. 2.5.9. Port access for the out-of-band management IP address The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port 6180 on the provisioner node and on the OpenShift Container Platform control plane nodes. TLS port 6183 is required for virtual media installation, for example, by using Redfish. Additional resources Using DNS forwarding 2.6. Configuring nodes Configuring nodes when using the provisioning network Each node in the cluster requires the following configuration for proper installation. Warning A mismatch between nodes will cause an installation failure. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> The Red Hat Enterprise Linux (RHEL) 9.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 9.x using a local Satellite server or a PXE server, PXE-enable NIC2. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. PXE-enabled is optional. 2 Note Ensure PXE is disabled on all other NICs. Configure the control plane and worker nodes as follows: PXE Boot order NIC1 PXE-enabled (provisioning network) 1 Configuring nodes without the provisioning network The installation process requires one NIC: NIC Network VLAN NICx baremetal <baremetal_vlan> NICx is a routable network ( baremetal ) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet. Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . Configuring nodes for Secure Boot manually Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. Note Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media. To enable Secure Boot manually, refer to the hardware guide for the node and execute the following: Procedure Boot the node and enter the BIOS menu. Set the node's boot mode to UEFI Enabled . Enable Secure Boot. Important Red Hat does not support Secure Boot with self-generated keys. 2.7. Out-of-band management Nodes typically have an additional NIC used by the baseboard management controllers (BMCs). These BMCs must be accessible from the provisioner node. 
Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform installation. The out-of-band management setup is out of scope for this document. Using a separate management network for out-of-band management can enhance performance and improve security. However, using the provisioning network or the bare metal network are valid options. Note The bootstrap VM features a maximum of two network interfaces. If you configure a separate management network for out-of-band management, and you are using a provisioning network, the bootstrap VM requires routing access to the management network through one of the network interfaces. In this scenario, the bootstrap VM can then access three networks: the bare metal network the provisioning network the management network routed through one of the network interfaces 2.8. Required data for installation Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes: Out-of-band management IP Examples Dell (iDRAC) IP HP (iLO) IP Fujitsu (iRMC) IP When using the provisioning network NIC ( provisioning ) MAC address NIC ( baremetal ) MAC address When omitting the provisioning network NIC ( baremetal ) MAC address 2.9. Validation checklist for nodes When using the provisioning network ❏ NIC1 VLAN is configured for the provisioning network. ❏ NIC1 for the provisioning network is PXE-enabled on the provisioner, control plane, and worker nodes. ❏ NIC2 VLAN is configured for the baremetal network. ❏ PXE has been disabled on all other NICs. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation. When omitting the provisioning network ❏ NIC1 VLAN is configured for the baremetal network. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation.
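Before starting the installation, you can spot-check the DNS entries called out in the checklists with the dig command mentioned in the DNS requirements section. The following sketch assumes the example cluster name test-cluster.example.com used earlier in this section; the looked-up names are illustrative, and <api_vip> is a placeholder for the reserved API virtual IP address.
$ dig +short api.test-cluster.example.com
$ dig +short console-openshift-console.apps.test-cluster.example.com
$ dig +short -x <api_vip>
The first two queries must return the API and ingress virtual IP addresses, and the reverse lookup must return the corresponding record name; if any query returns nothing, revisit the A/AAAA and PTR records before proceeding.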
|
[
"<cluster_name>.<base_domain>",
"test-cluster.example.com"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-prerequisites
|
12.2. Managing Object Identifiers
|
12.2. Managing Object Identifiers Each LDAP object class or attribute must be assigned a unique name and object identifier (OID). An OID is a dot-separated number which identifies the schema element to the server. OIDs can be hierarchical, with a base OID that can be expanded to accommodate different branches. For example, the base OID could be 1 , and there can be a branch for attributes at 1.1 and for object classes at 1.2 . Note It is not required to have a numeric OID for creating custom schema, but Red Hat strongly recommends it for better forward compatibility and performance. OIDs are assigned to an organization through the Internet Assigned Numbers Authority (IANA), and Directory Server does not provide a mechanism to obtain OIDs. To get information about obtaining OIDs, visit the IANA website at http://www.iana.org/cgi-bin/enterprise.pl . After obtaining a base OID from IANA, plan how the OIDs are going to be assigned to custom schema elements. Define a branch for both attributes and object classes; there can also be branches for matching rules and LDAP controls. Once the OID branches are defined, create an OID registry to track OID assignments. An OID registry is a list that gives the OIDs and descriptions of the OIDs used in the directory schema. This ensures that no OID is ever used for more than one purpose. Publish the OID registry with the custom schema.
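As an illustration of how a registered base OID might be branched in practice, the following ldapmodify sketch adds one custom attribute and one custom object class over LDAP; the enterprise number 99999 is a placeholder rather than a real IANA assignment, and the attribute and object class names are hypothetical. The branch numbers follow the convention described above, with attributes under .1 and object classes under .2.
$ ldapmodify -D "cn=Directory Manager" -W -x -H ldap://server.example.com
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'examplePreferredOS' DESC 'Preferred operating system' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'Example Corp' )
-
add: objectClasses
objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'exampleWorkstation' DESC 'Workstation entry for Example Corp' SUP top AUXILIARY MAY examplePreferredOS X-ORIGIN 'Example Corp' )
Record both OIDs in the OID registry immediately so that neither value is ever reused for another schema element.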
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/customizing_the_schema-getting_and_assigning_object_identifiers
|
Chapter 3. Designing the Directory Schema
|
Chapter 3. Designing the Directory Schema The site survey conducted in Chapter 2, Planning the Directory Data revealed information about the data which will be stored in the directory. The directory schema describes the types of data in the directory, so determining what schema to use reflects decisions on how to represent the data stored in the directory. During the schema design process, each data element is mapped to an LDAP attribute, and related elements are gathered into LDAP object classes. A well-designed schema helps to maintain the integrity of the directory data. This chapter describes the directory schema and how to design a schema for unique organizational needs. For information on replicating a schema, see Section 7.4.4, "Schema Replication". 3.1. Schema Design Process Overview During the schema design process, select and define the object classes and attributes used to represent the entries stored by Red Hat Directory Server. Schema design involves the following steps: Choosing predefined schema elements to meet as many of the data needs as possible. Extending the standard Directory Server schema to define new elements to meet any remaining needs. Planning for schema maintenance. The simplest and most easily-maintained option is to use existing schema elements defined in the standard schema provided with Directory Server. Choosing standard schema elements helps ensure compatibility with directory-enabled applications. Because the schema is based on the LDAP standard, it has been reviewed and agreed to by a wide range of directory users.
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_schema
|
Network APIs
|
Network APIs OpenShift Container Platform 4.16 Reference guide for network APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/index
|
4.2. Installing NFS-Ganesha during an ISO Installation
|
4.2. Installing NFS-Ganesha during an ISO Installation For more information about installing Red Hat Gluster Storage using an ISO image, see Section 2.2, "Installing from an ISO Image" . While installing Red Hat Storage using an ISO, in the Customizing the Software Selection screen, select RH-Gluster-NFS-Ganesha and click . Proceed with the remaining installation steps for installing Red Hat Gluster Storage. For more information on how to install Red Hat Storage using an ISO, see Installing from an ISO Image.
| null |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/ch04s02
|
Chapter 1. Diagnosing the problem
|
Chapter 1. Diagnosing the problem To start troubleshooting Ansible Automation Platform, use the must-gather command on OpenShift Container Platform or the sos utility on a VM-based installation to collect configuration and diagnostic information. You can attach the output of these utilities to your support case. 1.1. Troubleshooting Ansible Automation Platform on OpenShift Container Platform by using the must-gather command The oc adm must-gather command line interface (CLI) command collects information from your Ansible Automation Platform installation deployed on OpenShift Container Platform. It gathers information that is often needed for debugging issues, including resource definitions and service logs. Running the oc adm must-gather CLI command creates a new directory containing the collected data that you can use to troubleshoot or attach to your support case. If your OpenShift environment does not have access to registry.redhat.io and you cannot run the must-gather command, then run the oc adm inspect command instead. Prerequisites The OpenShift CLI ( oc ) is installed. Procedure Log in to your cluster: oc login <openshift_url> Run one of the following commands based on your level of access in the cluster: Run must-gather across the entire cluster: oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-25/aap-must-gather-rhel8 --dest-dir <dest_dir> --image specifies the image that gathers data --dest-dir specifies the directory for the output Run must-gather for a specific namespace in the cluster: oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-25/aap-must-gather-rhel8 --dest-dir <dest_dir> - /usr/bin/ns-gather <namespace> - /usr/bin/ns-gather limits the must-gather data collection to a specified namespace To attach the must-gather archive to your support case, create a compressed file from the must-gather directory created earlier and attach it to your support case. For example, on a computer that uses a Linux operating system, run the following command, replacing <must-gather.local.5421342344627712289/> with the must-gather directory name: $ tar cvaf must-gather.tar.gz <must-gather.local.5421342344627712289/> Additional resources For information about installing the OpenShift CLI ( oc ), see Installing the OpenShift CLI in the OpenShift Container Platform Documentation. For information about running the oc adm inspect command, see the oc adm inspect section in the OpenShift Container Platform Documentation. 1.2. Troubleshooting Ansible Automation Platform on VM-based installations by generating an sos report The sos utility collects configuration, diagnostic, and troubleshooting data from your Ansible Automation Platform on a VM-based installation. For more information about installing and using the sos utility, see Generating an sos report for technical support.
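For the OpenShift Container Platform scenario in Section 1.1, if registry.redhat.io is unreachable and you fall back to oc adm inspect, a minimal invocation scoped to a single namespace might look like the following; the namespace and destination directory are placeholders.
$ oc adm inspect ns/<namespace> --dest-dir <dest_dir>
The command writes resource definitions and pod logs for the namespace into <dest_dir>, which you can then compress with tar and attach to your support case in the same way as a must-gather archive.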
|
[
"login <openshift_url>",
"adm must-gather --image=registry.redhat.io/ansible-automation-platform-25/aap-must-gather-rhel8 --dest-dir <dest_dir>",
"adm must-gather --image=registry.redhat.io/ansible-automation-platform-25/aap-must-gather-rhel8 --dest-dir <dest_dir> - /usr/bin/ns-gather <namespace>",
"tar cvaf must-gather.tar.gz <must-gather.local.5421342344627712289/>"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/diagnosing-the-problem
|
4.5. Testing the Resource Configuration
|
4.5. Testing the Resource Configuration If the Samba configuration was successful, you should be able to mount the Samba share on a node in the cluster. The following example procedure mounts a Samba share. Add an existing user in the cluster node to the smbpasswd file and assign a password. In the following example, there is an existing user smbuser . Mount the Samba share: Check whether the file system is mounted: To check for Samba recovery, perform the following procedure. Manually stop the CTDB resource with the following command: After you stop the resource, the system should recover the service. Check the cluster status with the pcs status command. You should see that the ctdb-clone resource has started, but you will also see a ctdb_monitor failure. To clear this error from the status, enter the following command on one of the cluster nodes:
|
[
"smbpasswd -a smbuser New SMB password: Retype new SMB password: Added user smbuser",
"mkdir /mnt/sambashare mount -t cifs -o user=smbuser //198.162.1.151/public /mnt/sambashare Password for smbuser@//198.162.1.151/public: ********",
"mount | grep /mnt/sambashare //198.162.1.151/public on /mnt/sambashare type cifs (rw,relatime,vers=1.0,cache=strict,username=smbuser,domain=LINUXSERVER,uid=0,noforceuid,gid=0,noforcegid,addr=10.37.167.205,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,echo_interval=60,actimeo=1)",
"pcs resource debug-stop ctdb",
"pcs status Clone Set: ctdb-clone [ctdb] Started: [ z1.example.com z2.example.com ] Failed Actions: * ctdb_monitor_10000 on z1.example.com 'unknown error' (1): call=126, status=complete, exitreason='CTDB status call failed: connect() failed, errno=111', last-rc-change='Thu Oct 19 18:39:51 2017', queued=0ms, exec=0ms",
"pcs resource cleanup ctdb-clone"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-unittestsamba-haaa
|
18.2. Overview of the System z Installation Procedure
|
18.2. Overview of the System z Installation Procedure You can install Red Hat Enterprise Linux on System z interactively or in unattended mode. Installation on System z differs from installation on other architectures in that it is typically performed over a network and not from a local DVD. The installation can be summarized as follows: Booting (IPL) the installer Connect with the mainframe, then perform an initial program load (IPL), or boot, from the medium containing the installation program. Installation Phase 1 Set up an initial network device. This network device is then used to connect to the installation system via SSH or VNC. This gets you a full-screen mode terminal or graphical display to continue installation as on other architectures. Installation Phase 2 Specify which language to use, and how and where the installation program and the software packages to be installed from the repository on the Red Hat installation medium can be found. Installation Phase 3 Use anaconda (the main part of the Red Hat installation program) to perform the rest of the installation. Figure 18.1. The Installation Process 18.2.1. Booting (IPL) the Installer After establishing a connection with the mainframe, you need to perform an initial program load (IPL), or boot, from the medium containing the installation program. This document describes the most common methods of installing Red Hat Enterprise Linux 6.9 on System z. In general, you can use any method to boot the Linux installation system, which consists of a kernel ( kernel.img ) and initial ramdisk ( initrd.img ) with at least the parameters in generic.prm . The Linux installation system is also called the installer in this book. The control point from where you can start the IPL process depends on the environment where your Linux is to run. If your Linux is to run as a z/VM guest operating system, the control point is the control program (CP) of the hosting z/VM. If your Linux is to run in LPAR mode, the control point is the mainframe's Support Element (SE) or an attached IBM System z Hardware Management Console (HMC). You can use the following boot media only if Linux is to run as a guest operating system under z/VM: z/VM reader - refer to Section 20.1.1, "Using the z/VM Reader" for details. You can use the following boot media only if Linux is to run in LPAR mode: SE or HMC through a remote FTP server - refer to Section 20.2.1, "Using an FTP Server" for details. SE or HMC DVD - refer to Section 20.2.2, "Using the HMC or SE DVD Drive" for details You can use the following boot media for both z/VM and LPAR: DASD - refer to Section 20.1.2, "Using a Prepared DASD" for z/VM or Section 20.2.3, "Using a Prepared DASD" for LPAR SCSI device that is attached through an FCP channel - refer to Section 20.1.3, "Using a Prepared FCP-attached SCSI Disk" for z/VM or Section 20.2.4, "Using a Prepared FCP-attached SCSI Disk" for LPAR FCP-attached SCSI DVD - refer to Section 20.1.4, " Using an FCP-attached SCSI DVD Drive" for z/VM or Section 20.2.5, "Using an FCP-attached SCSI DVD Drive" for LPAR If you use DASD and FCP-attached SCSI devices (except SCSI DVDs) as boot media, you must have a configured zipl boot loader. For more information, see the Chapter on zipl in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/installation_procedure_overview-s390
|
Chapter 12. Enabling Chinese, Japanese, or Korean text input
|
Chapter 12. Enabling Chinese, Japanese, or Korean text input If you write with Chinese, Japanese, or Korean characters, you can configure RHEL to input text in your language. 12.1. Input methods Certain scripts, such as Chinese, Japanese, or Korean, require keyboard input to go through an Input Method Engine (IME) to enter native text. An input method is a set of conversion rules between the text input and the selected script. An IME is a software that performs the input conversion specified by the input method. To input text in these scripts, you must set up an IME. If you installed the system in your native language and selected your language at the GNOME Initial Setup screen, the input method for your language is enabled by default. 12.2. Available input method engines The following input method engines (IMEs) are available on RHEL from the listed packages: Table 12.1. Available input method engines Languages Scripts IME name Package Chinese Simplified Chinese Intelligent Pinyin ibus-libpinyin Chinese Traditional Chinese New Zhuyin ibus-libzhuyin Japanese Kanji, Hiragana, Katakana Kana Kanji ibus-kkc Korean Hangul Hangul ibus-hangul Other Various M17N ibus-m17n 12.3. Installing input method engines This procedure installs input method engines (IMEs) that you can use to input Chinese, Japanese, and Korean text. Procedure Install all available input method packages: 12.4. Switching the input method in GNOME This procedure sets up the input method for your script, such as for Chinese, Japanese, or Korean scripts. Prerequisites The input method packages are installed. Procedure Go to the system menu , which is accessible from the top-right screen corner, and click Settings . Select the Region & Language section. In the Input Sources list, review the currently enabled input methods. If your input method is missing: Click the + button under the Input Sources list. Select your language. Note If you cannot find your language in the menu, click the three dots icon ( More... ) at the end of the menu. Select the input method that you want to use. A cog wheel icon marks all input methods to distinguish them from simple keyboard layouts. Confirm your selection by clicking Add . Switch the active input method using one of the following ways: Click the input method indicator on the right side of the top panel and select your input method. Switch between the enabled input methods using the Super + Space keyboard shortcut. Verification Open a text editor. Type text in your language. Verify that the text appears in your native script. 12.5. Additional resources Installing a font for the Chinese standard GB 18030 character set (Red Hat Knowledgebase)
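As a lighter-weight alternative to installing the full input method group from Section 12.3, you can install a single engine from Table 12.1; for example, for Intelligent Pinyin (the package name is taken from the table, assuming it is available in your enabled repositories):
yum install ibus-libpinyin
You might need to log out and log back in before the newly installed engine appears in the Input Sources list.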
|
[
"yum install @input-methods"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/assembly_enabling-chinese-japanese-or-korean-text-input_using-the-desktop-environment-in-rhel-8
|
Chapter 11. LocalSubjectAccessReview [authorization.k8s.io/v1]
|
Chapter 11. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 11.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 11.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 11.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 11.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description fieldSelector object FieldSelectorAttributes indicates a field limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. 
For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid. group string Group is the API Group of the Resource. "*" means all. labelSelector object LabelSelectorAttributes indicates a label limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 11.1.5. .spec.resourceAttributes.fieldSelector Description FieldSelectorAttributes indicates a field limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid. Type object Property Type Description rawSelector string rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. 
The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present. requirements array (FieldSelectorRequirement) requirements is the parsed interpretation of a field selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood. 11.1.6. .spec.resourceAttributes.labelSelector Description LabelSelectorAttributes indicates a label limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid. Type object Property Type Description rawSelector string rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present. requirements array (LabelSelectorRequirement) requirements is the parsed interpretation of a label selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood. 11.1.7. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 11.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 11.2.1. /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 11.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a LocalSubjectAccessReview Table 11.2. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 11.3. HTTP responses HTTP code Reponse body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty
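For illustration, you can submit a LocalSubjectAccessReview to the create endpoint above with oc create; in the following sketch the namespace, user name, and resource attributes are placeholders, and the -o yaml option prints the object returned by the API server, including its status.
$ oc create -f - -o yaml <<EOF
apiVersion: authorization.k8s.io/v1
kind: LocalSubjectAccessReview
metadata:
  namespace: <namespace>
spec:
  user: <username>
  resourceAttributes:
    namespace: <namespace>
    verb: list
    resource: pods
EOF
The returned status block contains the allowed, denied, and reason fields described in the SubjectAccessReviewStatus section above.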
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/localsubjectaccessreview-authorization-k8s-io-v1
|
10.5. Setting Ethers Information for a Host
|
10.5. Setting Ethers Information for a Host NIS can host an ethers table which can be used to manage DHCP configuration files for systems based on their platform, operating system, DNS domain, and MAC address - all information stored in host entries in IdM. In Identity Management, each system is created with a corresponding ethers entry in the directory, in the ou=ethers subtree. This entry is used to create a NIS map for the ethers service which can be managed by the NIS compatibility plug-in in IdM. To configure NIS maps for ethers entries: Add the MAC address attribute to a host entry. For example: Open the nsswitch.conf file. Add a line for the ethers service, and set it to use LDAP for its lookup. Check that the ethers information is available for the client.
|
[
"cn=server,ou=ethers,dc=example,dc=com",
"[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --macaddress=12:34:56:78:9A:BC server.example.com",
"ethers: ldap",
"getnt ethers server.example.com"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/host-ethers
|
Chapter 7. RangeAllocation [security.openshift.io/v1]
|
Chapter 7. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required range data 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data string data is a byte array representing the serialized state of a range allocation. It is a bitmap with each bit set to one to represent a range is taken. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata range string range is a string representing a unique label for a range of uids, "1000000000-2000000000/10000". 7.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/rangeallocations DELETE : delete collection of RangeAllocation GET : list or watch objects of kind RangeAllocation POST : create a RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations GET : watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. /apis/security.openshift.io/v1/rangeallocations/{name} DELETE : delete a RangeAllocation GET : read the specified RangeAllocation PATCH : partially update the specified RangeAllocation PUT : replace the specified RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations/{name} GET : watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/security.openshift.io/v1/rangeallocations HTTP method DELETE Description delete collection of RangeAllocation Table 7.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status_v8 schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RangeAllocation Table 7.3. HTTP responses HTTP code Reponse body 200 - OK RangeAllocationList schema 401 - Unauthorized Empty HTTP method POST Description create a RangeAllocation Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body RangeAllocation schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 202 - Accepted RangeAllocation schema 401 - Unauthorized Empty 7.2.2. /apis/security.openshift.io/v1/watch/rangeallocations HTTP method GET Description watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. Table 7.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/security.openshift.io/v1/rangeallocations/{name} Table 7.8. Global path parameters Parameter Type Description name string name of the RangeAllocation HTTP method DELETE Description delete a RangeAllocation Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.10. HTTP responses HTTP code Reponse body 200 - OK Status_v8 schema 202 - Accepted Status_v8 schema 401 - Unauthorized Empty HTTP method GET Description read the specified RangeAllocation Table 7.11. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RangeAllocation Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RangeAllocation Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. Body parameters Parameter Type Description body RangeAllocation schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty 7.2.4. /apis/security.openshift.io/v1/watch/rangeallocations/{name} Table 7.17. Global path parameters Parameter Type Description name string name of the RangeAllocation HTTP method GET Description watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_apis/rangeallocation-security-openshift-io-v1
|
5.3. Planning Security Domains
|
5.3. Planning Security Domains A security domain is a registry of PKI services. PKI services, such as CAs, register information about themselves in these domains so users of PKI services can find other services by inspecting the registry. The security domain service in Certificate System manages both the registration of PKI services for Certificate System subsystems and a set of shared trust policies. The registry provides a complete view of all PKI services provided by the subsystems within that domain. Each Certificate System subsystem must be either a host or a member of a security domain. A CA subsystem is the only subsystem which can host a security domain. The security domain shares the CA internal database for privileged user and group information to determine which users can update the security domain, register new PKI services, and issue certificates. A security domain is created during CA configuration, which automatically creates an entry in the security domain CA's LDAP directory. Each entry contains all the important information about the domain. Every subsystem within the domain, including the CA registering the security domain, is recorded under the security domain container entry. The URL to the CA uniquely identifies the security domain. The security domain is also given a friendly name, such as Example Corp Intranet PKI . All other subsystems - KRA, TPS, TKS, OCSP, and other CAs - must become members of the security domain by supplying the security domain URL when configuring the subsystem. Each subsystem within the security domain shares the same trust policies and trusted roots which can be retrieved from different servers and browsers. The information available in the security domain is used during configuration of a new subsystem, which makes the configuration process streamlined and automated. For example, when a TPS needs to connect to a CA, it can consult the security domain to get a list of available CAs. Each CA has its own LDAP entry. The security domain is an organizational group underneath that CA entry: Then there is a list of each subsystem type beneath the security domain organizational group, with a special object class ( pkiSecurityGroup ) to identify the group type: Each subsystem instance is then stored as a member of that group, with a special pkiSubsystem object class to identify the entry type: If a subsystem needs to contact another subsystem to perform an operation, it contacts the CA which hosts the security domain (by invoking a servlet which connects over the administrative port of the CA). The security domain CA then retrieves the information about the subsystem from its LDAP database, and returns that information to the requesting subsystem. The subsystem authenticates to the security domain using a subsystem certificate. Consider the following when planning the security domain: The CA hosting the security domain can be signed by an external authority. Multiple security domains can be set up within an organization. However, each subsystem can belong to only one security domain. Subsystems within a domain can be cloned. Cloning subsystem instances distributes the system load and provides failover points. The security domain streamlines configuration between the CA and KRA; the KRA can push its KRA connector information and transport certificates automatically to the CA instead of administrators having to manually copy the certificates over to the CA. The Certificate System security domain allows an offline CA to be set up. 
In this scenario, the offline root has its own security domain. All online subordinate CAs belong to a different security domain. The security domain streamlines configuration between the CA and OCSP. The OCSP can push its information to the CA for the CA to set up OCSP publishing and also retrieve the CA certificate chain from the CA and store it in the internal database.
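Because the security domain registry is stored in the CA's internal LDAP database, you can inspect it directly with ldapsearch. The following sketch assumes a CA instance whose internal database suffix is o=pki-tomcat-CA, as in the example entries above; adjust the suffix, bind DN, and connection details for your deployment.
$ ldapsearch -x -D "cn=Directory Manager" -W -H ldap://server.example.com:389 -b "ou=Security Domain,o=pki-tomcat-CA" "(objectClass=pkiSubsystem)" cn host SecurePort SubsystemName
Each returned entry corresponds to one registered subsystem and shows the host, port, and subsystem name attributes used when other subsystems look up services in the domain.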
|
[
"ou=Security Domain,dc=server.example.com-pki-ca",
"cn=KRAList,ou=Security Domain,o=pki-tomcat-CA objectClass: top objectClass: pkiSecurityGroup cn: KRAList",
"dn: cn=kra.example.com:8443,cn=KRAList,ou=Security Domain,o=pki-tomcat-CA objectClass: top objectClass: pkiSubsystem cn: kra.example.com:8443 host: server.example.com UnSecurePort: 8080 SecurePort: 8443 SecureAdminPort: 8443 SecureAgentPort: 8443 SecureEEClientAuthPort: 8443 DomainManager: false Clone: false SubsystemName: KRA kra.example.com 8443"
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/certificate_manager-security_domains
|
Chapter 16. Using the Red Hat Marketplace
|
Chapter 16. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 16.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota. 16.1.1. Connect OpenShift Container Platform clusters to the Marketplace Cluster administrators can install a common set of applications on OpenShift Container Platform clusters that connect to the Marketplace. They can also use the Marketplace to track cluster usage against subscriptions or quotas. Users that they add by using the Marketplace have their product usage tracked and billed to their organization. During the cluster connection process , a Marketplace Operator is installed that updates the image registry secret, manages the catalog, and reports application usage. 16.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application . You can access installed applications from the web console by clicking Operators > Installed Operators . 16.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console's Administrator and Developer perspectives. The Developer perspective Developers can access newly installed capabilities by using the Developer perspective. For example, after a database Operator is installed, a developer can create an instance from the catalog within their project. Database usage is aggregated and reported to the cluster administrator. This perspective does not include Operator installation and application usage tracking. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list.
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/red-hat-marketplace
|
Hardware Considerations for Implementing SR-IOV
|
Hardware Considerations for Implementing SR-IOV Red Hat Virtualization 4.3 Hardware considerations for implementing SR-IOV with Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/hardware_considerations_for_implementing_sr-iov/index
|
Chapter 36. NetworkBaselineService
|
Chapter 36. NetworkBaselineService 36.1. ModifyBaselineStatusForPeers PATCH /v1/networkbaseline/{deploymentId}/peers 36.1.1. Description 36.1.2. Parameters 36.1.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 36.1.2.2. Body Parameter Name Description Required Default Pattern body V1ModifyBaselineStatusForPeersRequest X 36.1.3. Return Type Object 36.1.4. Content Type application/json 36.1.5. Responses Table 36.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 36.1.6. Samples 36.1.7. Common object reference 36.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 36.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 36.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 36.1.7.3. 
StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 36.1.7.4. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 36.1.7.5. V1ModifyBaselineStatusForPeersRequest Field Name Required Nullable Type Description Format deploymentId String peers List of V1NetworkBaselinePeerStatus 36.1.7.6. V1NetworkBaselinePeerEntity Field Name Required Nullable Type Description Format id String type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, 36.1.7.7. V1NetworkBaselinePeerStatus Field Name Required Nullable Type Description Format peer V1NetworkBaselineStatusPeer status V1NetworkBaselinePeerStatusStatus BASELINE, ANOMALOUS, 36.1.7.8. V1NetworkBaselinePeerStatusStatus Enum Values BASELINE ANOMALOUS 36.1.7.9. V1NetworkBaselineStatusPeer Field Name Required Nullable Type Description Format entity V1NetworkBaselinePeerEntity port Long The port and protocol of the destination of the given connection. int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, ingress Boolean A boolean representing whether the query is for an ingress or egress connection. This is defined with respect to the current deployment. Thus: - If the connection in question is in the outEdges of the current deployment, this should be false. - If it is in the outEdges of the peer deployment, this should be true. 36.2. GetNetworkBaselineStatusForFlows POST /v1/networkbaseline/{deploymentId}/status 36.2.1. Description 36.2.2. Parameters 36.2.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 36.2.2.2. Body Parameter Name Description Required Default Pattern body V1NetworkBaselineStatusRequest X 36.2.3. Return Type V1NetworkBaselineStatusResponse 36.2.4. Content Type application/json 36.2.5. Responses Table 36.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1NetworkBaselineStatusResponse 0 An unexpected error response. RuntimeError 36.2.6. Samples 36.2.7. Common object reference 36.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 36.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 36.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 36.2.7.3. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 36.2.7.4. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 36.2.7.5. V1NetworkBaselinePeerEntity Field Name Required Nullable Type Description Format id String type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, 36.2.7.6. V1NetworkBaselinePeerStatus Field Name Required Nullable Type Description Format peer V1NetworkBaselineStatusPeer status V1NetworkBaselinePeerStatusStatus BASELINE, ANOMALOUS, 36.2.7.7. V1NetworkBaselinePeerStatusStatus Enum Values BASELINE ANOMALOUS 36.2.7.8. V1NetworkBaselineStatusPeer Field Name Required Nullable Type Description Format entity V1NetworkBaselinePeerEntity port Long The port and protocol of the destination of the given connection. int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, ingress Boolean A boolean representing whether the query is for an ingress or egress connection. This is defined with respect to the current deployment. Thus: - If the connection in question is in the outEdges of the current deployment, this should be false. - If it is in the outEdges of the peer deployment, this should be true. 36.2.7.9. V1NetworkBaselineStatusRequest Field Name Required Nullable Type Description Format deploymentId String peers List of V1NetworkBaselineStatusPeer 36.2.7.10. 
V1NetworkBaselineStatusResponse Field Name Required Nullable Type Description Format statuses List of V1NetworkBaselinePeerStatus 36.3. GetNetworkBaseline GET /v1/networkbaseline/{id} 36.3.1. Description 36.3.2. Parameters 36.3.2.1. Path Parameters Name Description Required Default Pattern id X null 36.3.3. Return Type StorageNetworkBaseline 36.3.4. Content Type application/json 36.3.5. Responses Table 36.3. HTTP Response Codes Code Message Datatype 200 A successful response. StorageNetworkBaseline 0 An unexpected error response. RuntimeError 36.3.6. Samples 36.3.7. Common object reference 36.3.7.1. DeploymentListenPort Field Name Required Nullable Type Description Format port Long int64 l4protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 36.3.7.2. NetworkEntityInfoExternalSource Update normalizeDupNameExtSrcs(... ) in central/networkgraph/aggregator/aggregator.go whenever this message is updated. Field Name Required Nullable Type Description Format name String cidr String default Boolean default indicates whether the external source is user-generated or system-generated. 36.3.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 36.3.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 36.3.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 36.3.7.5. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 36.3.7.6. StorageNetworkBaseline Field Name Required Nullable Type Description Format deploymentId String This is the ID of the baseline. clusterId String namespace String peers List of StorageNetworkBaselinePeer forbiddenPeers List of StorageNetworkBaselinePeer A list of peers that will never be added to the baseline. For now, this contains peers that the user has manually removed. This is used to ensure we don't add it back in the event we see the flow again. observationPeriodEnd Date date-time locked Boolean deploymentName String 36.3.7.7. StorageNetworkBaselineConnectionProperties Field Name Required Nullable Type Description Format ingress Boolean port Long int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 36.3.7.8. StorageNetworkBaselinePeer Field Name Required Nullable Type Description Format entity StorageNetworkEntity properties List of StorageNetworkBaselineConnectionProperties 36.3.7.9. StorageNetworkEntity Field Name Required Nullable Type Description Format info StorageNetworkEntityInfo scope StorageNetworkEntityScope 36.3.7.10. StorageNetworkEntityInfo Field Name Required Nullable Type Description Format type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, id String deployment StorageNetworkEntityInfoDeployment externalSource NetworkEntityInfoExternalSource 36.3.7.11. StorageNetworkEntityInfoDeployment Field Name Required Nullable Type Description Format name String namespace String cluster String listenPorts List of DeploymentListenPort 36.3.7.12. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 36.3.7.13. StorageNetworkEntityScope Field Name Required Nullable Type Description Format clusterId String 36.4. LockNetworkBaseline PATCH /v1/networkbaseline/{id}/lock 36.4.1. Description 36.4.2. Parameters 36.4.2.1. Path Parameters Name Description Required Default Pattern id X null 36.4.2.2. Body Parameter Name Description Required Default Pattern body V1ResourceByID X 36.4.3. Return Type Object 36.4.4. Content Type application/json 36.4.5. Responses Table 36.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 36.4.6. Samples 36.4.7. Common object reference 36.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 36.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 36.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 36.4.7.3. V1ResourceByID Field Name Required Nullable Type Description Format id String 36.5. UnlockNetworkBaseline PATCH /v1/networkbaseline/{id}/unlock 36.5.1. Description 36.5.2. Parameters 36.5.2.1. Path Parameters Name Description Required Default Pattern id X null 36.5.2.2. Body Parameter Name Description Required Default Pattern body V1ResourceByID X 36.5.3. Return Type Object 36.5.4. Content Type application/json 36.5.5. Responses Table 36.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 36.5.6. Samples 36.5.7. Common object reference 36.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 36.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 36.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 36.5.7.3. V1ResourceByID Field Name Required Nullable Type Description Format id String
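Because the Samples sections above are empty, the following minimal sketch shows how these endpoints can be exercised with a generic HTTP client. It assumes a Central endpoint in ROX_CENTRAL_ADDRESS and an API token in ROX_API_TOKEN (both hypothetical placeholder variables) and bearer-token authentication; the paths and the V1ResourceByID request body are taken from the endpoint descriptions in this chapter, but the exact invocation is illustrative rather than authoritative.
# Fetch the network baseline for a deployment (GetNetworkBaseline).
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://$ROX_CENTRAL_ADDRESS/v1/networkbaseline/<deployment_id>"
# Lock the baseline for the same deployment (LockNetworkBaseline); the request body is a V1ResourceByID object.
curl -sk -X PATCH \
  -H "Authorization: Bearer $ROX_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"id": "<deployment_id>"}' \
  "https://$ROX_CENTRAL_ADDRESS/v1/networkbaseline/<deployment_id>/lock"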
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Status of this peer connection. As of now we only have two statuses: - BASELINE: the connection is in the current deployment baseline - ANOMALOUS: the connection is not recognized by the current deployment baseline",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Status of this peer connection. As of now we only have two statuses: - BASELINE: the connection is in the current deployment baseline - ANOMALOUS: the connection is not recognized by the current deployment baseline",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"NetworkBaseline represents a network baseline of a deployment. It contains all the baseline peers and their respective connections. next available tag: 8",
"NetworkBaselineConnectionProperties represents information about a baseline connection next available tag: 4",
"NetworkBaselinePeer represents a baseline peer. next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/networkbaselineservice
|
Chapter 58. JSONPath
|
Chapter 58. JSONPath Camel supports JSONPath to allow using Expression or Predicate on JSON messages. 58.1. Dependencies When using jsonpath with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jsonpath-starter</artifactId> </dependency> 58.2. JSONPath Options The JSONPath language supports 8 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). suppressExceptions Boolean Whether to suppress exceptions such as PathNotFoundException. allowSimple Boolean Whether to allow inlined Simple expressions in the JSONPath expression. allowEasyPredicate Boolean Whether to allow using the easy predicate parser to pre-parse predicates. writeAsString Boolean Whether to write the output of each row/element as a JSON String value instead of a Map/POJO value. headerName String Name of header to use as input, instead of the message body. option Enum To configure additional options on JSONPath. Multiple values can be separated by commas. Enum values: DEFAULT_PATH_LEAF_TO_NULL ALWAYS_RETURN_LIST AS_PATH_LIST SUPPRESS_EXCEPTIONS REQUIRE_PROPERTIES trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 58.3. Examples For example, you can use JSONPath in a Predicate with the Content Based Router EIP . from("queue:books.new") .choice() .when().jsonpath("$.store.book[?(@.price < 10)]") .to("jms:queue:book.cheap") .when().jsonpath("$.store.book[?(@.price < 30)]") .to("jms:queue:book.average") .otherwise() .to("jms:queue:book.expensive"); And in XML DSL: <route> <from uri="direct:start"/> <choice> <when> <jsonpath>$.store.book[?(@.price < 10)]</jsonpath> <to uri="mock:cheap"/> </when> <when> <jsonpath>$.store.book[?(@.price < 30)]</jsonpath> <to uri="mock:average"/> </when> <otherwise> <to uri="mock:expensive"/> </otherwise> </choice> </route> 58.4. JSONPath Syntax Using the JSONPath syntax takes some time to learn, even for basic predicates. For example, to find all the cheap books you have to write: $.store.book[?(@.price < 20)] 58.4.1. Easy JSONPath Syntax However, what if you could just write it as: store.book.price < 20 And you can omit the path if you just want to look at nodes with a price key: price < 20 To support this, there is an EasyPredicateParser which kicks in if you have defined the predicate using a basic style. That means the predicate must not start with the $ sign and must only include one operator. The easy syntax is: left OP right You can use the Camel Simple language on the right-hand side, for example: store.book.price < ${header.limit} See the JSONPath project page for more syntax examples. 58.5. Supported message body types Camel JSONPath supports message bodies of the following types: Type Comment File Reading from files String Plain strings Map Message bodies as java.util.Map types List Message bodies as java.util.List types POJO Optional If Jackson is on the classpath, then camel-jsonpath is able to use Jackson to read the message body as a POJO and convert it to a java.util.Map , which is supported by JSONPath. For example, you can add camel-jackson as a dependency to include Jackson (a sample dependency is shown below). InputStream If none of the above types match, then Camel will attempt to read the message body as a java.io.InputStream .
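The optional POJO support in the table above requires Jackson on the classpath. A minimal sketch of the extra Maven dependency, assuming the Red Hat build of Camel Spring Boot ships a camel-jackson starter alongside the camel-jsonpath starter shown in the Dependencies section:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-jackson-starter</artifactId>
</dependency>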
If a message body is of an unsupported type, an exception is thrown by default; however, you can configure JSONPath to suppress exceptions (see below). 58.6. Suppressing exceptions By default, jsonpath throws an exception if the JSON payload does not have a valid path according to the configured jsonpath expression. In some use cases you may want to ignore this, for example when the JSON payload contains optional data. Therefore, you can set the option suppressExceptions to true to ignore this as shown: from("direct:start") .choice() // use true to suppress exceptions .when().jsonpath("person.middlename", true) .to("mock:middle") .otherwise() .to("mock:other"); And in XML DSL: <route> <from uri="direct:start"/> <choice> <when> <jsonpath suppressExceptions="true">person.middlename</jsonpath> <to uri="mock:middle"/> </when> <otherwise> <to uri="mock:other"/> </otherwise> </choice> </route> This option is also available on the @JsonPath annotation. 58.7. Inline Simple expressions It's possible to inline Simple language expressions in the JSONPath expression using the simple syntax ${xxx} . An example is shown below: from("direct:start") .choice() .when().jsonpath("$.store.book[?(@.price < ${header.cheap})]") .to("mock:cheap") .when().jsonpath("$.store.book[?(@.price < ${header.average})]") .to("mock:average") .otherwise() .to("mock:expensive"); And in XML DSL: <route> <from uri="direct:start"/> <choice> <when> <jsonpath>$.store.book[?(@.price < ${header.cheap})]</jsonpath> <to uri="mock:cheap"/> </when> <when> <jsonpath>$.store.book[?(@.price < ${header.average})]</jsonpath> <to uri="mock:average"/> </when> <otherwise> <to uri="mock:expensive"/> </otherwise> </choice> </route> You can turn off support for inlined Simple expressions by setting the option allowSimple to false as shown: .when().jsonpath("$.store.book[?(@.price < 10)]", false, false) And in XML DSL: <jsonpath allowSimple="false">$.store.book[?(@.price < 10)]</jsonpath> 58.8. JSONPath injection You can use Bean Integration to invoke a method on a bean and use various languages such as JSONPath (via the @JsonPath annotation) to extract a value from the message and bind it to a method parameter, as shown below: public class Foo { @Consume("activemq:queue:books.new") public void doSomething(@JsonPath("$.store.book[*].author") String author, @Body String json) { // process the inbound message here } } 58.9. Encoding Detection The encoding of the JSON document is detected automatically if the document is encoded in Unicode (UTF-8, UTF-16LE, UTF-16BE, UTF-32LE, UTF-32BE), as specified in RFC-4627. If the encoding is a non-Unicode encoding, you can either make sure that you pass the document to JSONPath in String format, or you can specify the encoding in the CamelJsonPathJsonEncoding header, which is defined as a constant in JsonpathConstants.HEADER_JSON_ENCODING . 58.10. Split JSON data into sub rows as JSON You can use JSONPath to split a JSON document, such as: from("direct:start") .split().jsonpath("$.store.book[*]") .to("log:book"); Then each book is logged; however, the message body is a Map instance. Sometimes you may want to output this as a plain String JSON value instead, which can be done with the writeAsString option as shown: from("direct:start") .split().jsonpathWriteAsString("$.store.book[*]") .to("log:book"); Then each book is logged as a String JSON value. 58.11. Using header as input By default, JSONPath uses the message body as the input source.
However, you can also use a header as input by specifying the headerName option. For example, to count the number of books in a JSON document that is stored in a header named books , you can do: from("direct:start") .setHeader("numberOfBooks") .jsonpath("$..store.book.length()", false, int.class, "books") .to("mock:result"); In the jsonpath expression above, we specify the name of the header as books , and we also specify that we want the result converted to an integer by using int.class . The same example in XML DSL would be: <route> <from uri="direct:start"/> <setHeader name="numberOfBooks"> <jsonpath headerName="books" resultType="int">$..store.book.length()</jsonpath> </setHeader> <to uri="mock:result"/> </route> 58.12. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.language.jsonpath.allow-easy-predicate Whether to allow using the easy predicate parser to pre-parse predicates. true Boolean camel.language.jsonpath.allow-simple Whether to allow inlined Simple expressions in the JSONPath expression. true Boolean camel.language.jsonpath.enabled Whether to enable auto configuration of the jsonpath language. This is enabled by default. Boolean camel.language.jsonpath.header-name Name of header to use as input, instead of the message body. String camel.language.jsonpath.option To configure additional options on JSONPath. Multiple values can be separated by commas. String camel.language.jsonpath.suppress-exceptions Whether to suppress exceptions such as PathNotFoundException. false Boolean camel.language.jsonpath.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.jsonpath.write-as-string Whether to write the output of each row/element as a JSON String value instead of a Map/POJO value. false Boolean
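The auto-configuration options in the table above can be set like any other Spring Boot properties. A minimal application.properties sketch; the values are illustrative, not recommendations:
# Ignore missing paths instead of throwing PathNotFoundException
camel.language.jsonpath.suppress-exceptions=true
# Disable inlined Simple expressions inside JSONPath expressions
camel.language.jsonpath.allow-simple=false
# Emit each matched row/element as a JSON String instead of a Map/POJO
camel.language.jsonpath.write-as-string=true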
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jsonpath-starter</artifactId> </dependency>",
"from(\"queue:books.new\") .choice() .when().jsonpath(\"USD.store.book[?(@.price < 10)]\") .to(\"jms:queue:book.cheap\") .when().jsonpath(\"USD.store.book[?(@.price < 30)]\") .to(\"jms:queue:book.average\") .otherwise() .to(\"jms:queue:book.expensive\");",
"<route> <from uri=\"direct:start\"/> <choice> <when> <jsonpath>USD.store.book[?(@.price < 10)]</jsonpath> <to uri=\"mock:cheap\"/> </when> <when> <jsonpath>USD.store.book[?(@.price < 30)]</jsonpath> <to uri=\"mock:average\"/> </when> <otherwise> <to uri=\"mock:expensive\"/> </otherwise> </choice> </route>",
"USD.store.book[?(@.price < 20)]",
"store.book.price < 20",
"price < 20",
"left OP right",
"store.book.price < USD{header.limit}",
"from(\"direct:start\") .choice() // use true to suppress exceptions .when().jsonpath(\"person.middlename\", true) .to(\"mock:middle\") .otherwise() .to(\"mock:other\");",
"<route> <from uri=\"direct:start\"/> <choice> <when> <jsonpath suppressExceptions=\"true\">person.middlename</jsonpath> <to uri=\"mock:middle\"/> </when> <otherwise> <to uri=\"mock:other\"/> </otherwise> </choice> </route>",
"from(\"direct:start\") .choice() .when().jsonpath(\"USD.store.book[?(@.price < USD{header.cheap})]\") .to(\"mock:cheap\") .when().jsonpath(\"USD.store.book[?(@.price < USD{header.average})]\") .to(\"mock:average\") .otherwise() .to(\"mock:expensive\");",
"<route> <from uri=\"direct:start\"/> <choice> <when> <jsonpath>USD.store.book[?(@.price < USD{header.cheap})]</jsonpath> <to uri=\"mock:cheap\"/> </when> <when> <jsonpath>USD.store.book[?(@.price < USD{header.average})]</jsonpath> <to uri=\"mock:average\"/> </when> <otherwise> <to uri=\"mock:expensive\"/> </otherwise> </choice> </route>",
".when().jsonpath(\"USD.store.book[?(@.price < 10)]\", false, false)",
"<jsonpath allowSimple=\"false\">USD.store.book[?(@.price < 10)]</jsonpath>",
"public class Foo { @Consume(\"activemq:queue:books.new\") public void doSomething(@JsonPath(\"USD.store.book[*].author\") String author, @Body String json) { // process the inbound message here } }",
"from(\"direct:start\") .split().jsonpath(\"USD.store.book[*]\") .to(\"log:book\");",
"from(\"direct:start\") .split().jsonpathWriteAsString(\"USD.store.book[*]\") .to(\"log:book\");",
"from(\"direct:start\") .setHeader(\"numberOfBooks\") .jsonpath(\"USD..store.book.length()\", false, int.class, \"books\") .to(\"mock:result\");",
"<route> <from uri=\"direct:start\"/> <setHeader name=\"numberOfBooks\"> <jsonpath headerName=\"books\" resultType=\"int\">USD..store.book.length()</jsonpath> </setHeader> <to uri=\"mock:result\"/> </route>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jsonpath-language-starter
|
Chapter 6. Installing a cluster on Azure with network customizations
|
Chapter 6. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.16, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. 
If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 6.5.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 6.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 6.5.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 6.5.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 6.5.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 10 15 17 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 14 Specify the name of the resource group that contains the DNS zone for your base domain. 16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.5.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 6.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. 
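Referring back to the phase 1 fields listed above, a minimal sketch of how those network-related entries might appear in install-config.yaml; the values simply mirror the earlier Azure sample and are illustrative, not recommendations:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
Because phase 2 cannot override these values, set them before you run openshift-install create manifests.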
Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 6.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. 
spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.3. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 6.5. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. 
The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 6.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 6.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. 
Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 6.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 6.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 6.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 6.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.11. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 6.11.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.11.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 6.11.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 6.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.11.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
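One way to review those optional parameters, assuming the RHEL 9 binary name used earlier in this procedure (the actual file name depends on the <rhel_version> you extracted):
$ ./ccoctl.rhel9 azure create-all --help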
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 6.11.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.15. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting.
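Before you move on to customization, a couple of optional read-only spot checks can confirm that the cluster is healthy; these commands are not part of the procedures above, just a suggested follow-up:
$ oc get nodes
$ oc get clusteroperators
All nodes should report a Ready status and all cluster Operators should report Available before you continue.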
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-azure-network-customizations
|
Chapter 6. Managing database replication with Galera
|
Chapter 6. Managing database replication with Galera Red Hat OpenStack Platform uses the MariaDB Galera Cluster to manage database replication. Pacemaker runs the Galera service as a bundle set resource that manages the database master/slave status. You can use Galera to test and verify different aspects of the database cluster, such as hostname resolution, cluster integrity, node integrity, and database replication performance. When you investigate database cluster integrity, each node must meet the following criteria: The node is a part of the correct cluster. The node can write to the cluster. The node can receive queries and write commands from the cluster. The node is connected to other nodes in the cluster. The node is replicating write-sets to tables in the local database. 6.1. Verifying hostname resolution in a MariaDB cluster To troubleshoot the MariaDB Galera cluster, first eliminate any hostname resolution problems and then check the write-set replication status on the database of each Controller node. To access the MySQL database, use the password set by director during the overcloud deployment. By default, director binds the Galera resource to a hostname instead of an IP address. Therefore, any problems that prevent hostname resolution, such as misconfigured or failed DNS, might cause Pacemaker to incorrectly manage the Galera resource. Procedure From a Controller node, get the MariaDB database root password by running the hiera command. Get the name of the MariaDB container that runs on the node. Get the write-set replication information from the MariaDB database on each node. Each relevant variable uses the prefix wsrep . Verify the health and integrity of the MariaDB Galera cluster by checking that the cluster is reporting the correct number of nodes. 6.2. Checking MariaDB cluster integrity To investigate problems with the MariaDB Galera Cluster, check the integrity of the whole cluster by checking specific wsrep database variables on each Controller node. Procedure Run the following command and replace <variable> with the wsrep database variable that you want to check: The following example shows how to view the cluster state UUID of the node: The following table lists the wsrep database variables that you can use to check cluster integrity. Table 6.1. Database variables to check for cluster integrity Variable Summary Description wsrep_cluster_state_uuid Cluster state UUID ID of the cluster to which the node belongs. All nodes must have an identical cluster ID. A node with a different ID is not connected to the cluster. wsrep_cluster_size Number of nodes in the cluster You can check this on any node. If the value is less than the actual number of nodes, then some nodes either failed or lost connectivity. wsrep_cluster_conf_id Total number of cluster changes Determines whether the cluster was split to several components, or partitions. Partitioning is usually caused by a network failure. All nodes must have an identical value. In case some nodes report a different wsrep_cluster_conf_id , check the wsrep_cluster_status value to see if the nodes can still write to the cluster ( Primary ). wsrep_cluster_status Primary component status Determines whether the node can write to the cluster. If the node can write to the cluster, the wsrep_cluster_status value is Primary . Any other value indicates that the node is part of a non-operational partition. 6.3. 
Checking database node integrity in a MariaDB cluster To investigate problems with a specific Controller node in the MariaDB Galera Cluster, check the integrity of the node by checking specific wsrep database variables. Procedure Run the following command and replace <variable> with the wsrep database variable that you want to check: The following table lists the wsrep database variables that you can use to check node integrity. Table 6.2. Database variables to check for node integrity Variable Summary Description wsrep_ready Node ability to accept queries States whether the node can accept write-sets from the cluster. If so, then wsrep_ready is ON . wsrep_connected Node network connectivity States whether the node can connect to other nodes on the network. If so, then wsrep_connected is ON . wsrep_local_state_comment Node state Summarizes the node state. If the node can write to the cluster, then typical values for wsrep_local_state_comment can be Joining , Waiting on SST , Joined , Synced , or Donor . If the node is part of a non-operational component, then the value of wsrep_local_state_comment is Initialized . Note The wsrep_connected value can be ON even if the node is connected only to a subset of nodes in the cluster. For example, in case of a cluster partition, the node might be part of a component that cannot write to the cluster. For more information about checking cluster integrity, see Section 6.2, "Checking MariaDB cluster integrity" . If the wsrep_connected value is OFF , then the node is not connected to any cluster components. 6.4. Testing database replication performance in a MariaDB cluster To check the performance of the MariaDB Galera Cluster, run benchmark tests on the replication throughput of the cluster by checking specific wsrep database variables. Every time you query one of these variables, a FLUSH STATUS command resets the variable value. To run benchmark tests, you must run multiple queries and analyze the variances. These variances can help you determine how much Flow Control is affecting the cluster performance. Flow Control is a mechanism that the cluster uses to manage replication. When the local receive queue exceeds a certain threshold, Flow Control pauses the replication until the queue size goes down. For more information about Flow Control, see Flow Control on the Galera Cluster website. Procedure Run the following command and replace <variable> with the wsrep database variable that you want to check: The following table lists the wsrep database variables that you can use to test database replication performance. Table 6.3. Database variables to check for database replication performance Variable Summary Usage wsrep_local_recv_queue_avg Average size of the local received write-set queue after the last query. A value higher than 0.0 indicates that the node cannot apply write-sets as quickly as it receives write-sets, which triggers replication throttling. Check wsrep_local_recv_queue_min and wsrep_local_recv_queue_max for a detailed look at this benchmark. wsrep_local_send_queue_avg Average send queue length after the last query. A value higher than 0.0 indicates a higher likelihood of replication throttling and network throughput problems. wsrep_local_recv_queue_min and wsrep_local_recv_queue_max Minimum and maximum size of the local receive queue after the last query. If the value of wsrep_local_recv_queue_avg is higher than 0.0 , you can check these variables to determine the scope of the queue size. 
wsrep_flow_control_paused Fraction of the time that Flow Control paused the node after the last query. A value higher than 0.0 indicates that Flow Control paused the node. To determine the duration of the pause, multiply the wsrep_flow_control_paused value with the number of seconds between the queries. The optimal value is as close to 0.0 as possible. For example: If the value of wsrep_flow_control_paused is 0.50 one minute after the last query, then Flow Control paused the node for 30 seconds. If the value of wsrep_flow_control_paused is 1.0 one minute after the last query, then Flow Control paused the node for the entire minute. wsrep_cert_deps_distance Average difference between the lowest and highest sequence number ( seqno ) value that can be applied in parallel In case of throttling and pausing, this variable indicates how many write-sets on average can be applied in parallel. Compare the value with the wsrep_slave_threads variable to see how many write-sets can actually be applied simultaneously. wsrep_slave_threads Number of threads that can be applied simultaneously You can increase the value of this variable to apply more threads simultaneously, which also increases the value of wsrep_cert_deps_distance . The value of wsrep_slave_threads must not be higher than the number of CPU cores in the node. For example, if the wsrep_cert_deps_distance value is 20 , you can increase the value of wsrep_slave_threads from 2 to 4 to increase the amount of write-sets that the node can apply. If a problematic node already has an optimal wsrep_slave_threads value, you can exclude the node from the cluster while you investigate possible connectivity issues. 6.5. Additional resources What is MariaDB Galera Cluster?
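As a quick spot check that ties together the integrity and node checks above, you can query the main wsrep variables in a single statement. This is a minimal sketch that assumes the galera-bundle-podman-0 container name and the hiera-derived root password shown earlier; adjust both for your environment:

sudo podman exec galera-bundle-podman-0 sudo mysql -B --password="[MYSQL-HIERA-PASSWORD]" \
  -e "SHOW GLOBAL STATUS WHERE Variable_name IN
      ('wsrep_cluster_size','wsrep_cluster_status','wsrep_ready',
       'wsrep_connected','wsrep_local_state_comment');"

On a healthy Controller node, wsrep_cluster_status is Primary, wsrep_ready and wsrep_connected are ON, and wsrep_local_state_comment is Synced.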
|
[
"sudo hiera -c /etc/puppet/hiera.yaml \"mysql::server::root_password\" *[MYSQL-HIERA-PASSWORD]*",
"sudo podman ps | grep -i galera a403d96c5026 undercloud.ctlplane.localdomain:8787/rhosp-rhel9/openstack-mariadb:16.0-106 /bin/bash /usr/lo... 3 hours ago Up 3 hours ago galera-bundle-podman-0",
"sudo podman exec galera-bundle-podman-0 sudo mysql -B --password=\"[MYSQL-HIERA-PASSWORD]\" -e \"SHOW GLOBAL STATUS LIKE 'wsrep_%';\" +----------------------------+----------+ | Variable_name | Value | +----------------------------+----------+ | wsrep_applier_thread_count | 1 | | wsrep_apply_oooe | 0.018672 | | wsrep_apply_oool | 0.000630 | | wsrep_apply_window | 1.021942 | | ... | ... | +----------------------------+----------+",
"sudo podman exec galera-bundle-podman-0 sudo mysql -B --password=\"[MYSQL-HIERA-PASSWORD]\" -e \"SHOW GLOBAL STATUS LIKE <variable ;\"",
"sudo podman exec galera-bundle-podman-0 sudo mysql -B --password=\"[MYSQL-HIERA-PASSWORD]\" -e \"SHOW GLOBAL STATUS LIKE 'wsrep_cluster_state_uuid';\" +--------------------------+--------------------------------------+ | Variable_name | Value | +--------------------------+--------------------------------------+ | wsrep_cluster_state_uuid | e2c9a15e-5485-11e0-0800-6bbb637e7211 | +--------------------------+--------------------------------------+",
"sudo podman exec galera-bundle-podman-0 sudo mysql -B --password=\"[MYSQL-HIERA-PASSWORD]\" -e \"SHOW GLOBAL STATUS LIKE <variable> ;\"",
"sudo podman exec galera-bundle-podman-0 sudo mysql -B --password=\"[MYSQL-HIERA-PASSWORD]\" -e \"SHOW STATUS LIKE <variable> ;\""
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_deployment_and_usage/assembly_managing-db-replication-with-galera_rhosp
|
Chapter 6. Network Policy
|
Chapter 6. Network Policy As a user with the admin role, you can create a network policy for the netobserv namespace to secure inbound access to the Network Observability Operator. 6.1. Configuring an ingress network policy by using the FlowCollector custom resource You can configure the FlowCollector custom resource (CR) to deploy an ingress network policy for Network Observability by setting the spec.NetworkPolicy.enable specification to true . By default, the specification is false . If you have installed Loki, Kafka or any exporter in a different namespace that also has a network policy, you must ensure that the Network Observability components can communicate with them. Consider the following about your setup: Connection to Loki (as defined in the FlowCollector CR spec.loki parameter) Connection to Kafka (as defined in the FlowCollector CR spec.kafka parameter) Connection to any exporter (as defined in FlowCollector CR spec.exporters parameter) If you are using Loki and including it in the policy target, connection to an external object storage (as defined in your LokiStack related secret) Procedure . In the web console, go to Operators Installed Operators page. Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector CR. A sample configuration is as follows: Example FlowCollector CR for network policy apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: ["openshift-console", "openshift-monitoring"] 2 # ... 1 By default, the enable value is false . 2 Default values are ["openshift-console", "openshift-monitoring"] . 6.2. Creating a network policy for Network Observability If you want to further customize the network policies for the netobserv and netobserv-privileged namespaces, you must disable the managed installation of the policy from the FlowCollector CR, and create your own. You can use the network policy resources that are enabled from the FlowCollector CR as a starting point for the procedure that follows: Example netobserv network policy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress Example netobserv-privileged network policy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress Procedure Navigate to Networking NetworkPolicies . Select the netobserv project from the Project dropdown menu. Name the policy. For this example, the policy name is allow-ingress . Click Add ingress rule three times to create three ingress rules. Specify the following in the form: Make the following specifications for the first Ingress rule : From the Add allowed source dropdown menu, select Allow pods from the same namespace . Make the following specifications for the second Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . 
Add the label, kubernetes.io/metadata.name , and the selector, openshift-console . Make the following specifications for the third Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . Add the label, kubernetes.io/metadata.name , and the selector, openshift-monitoring . Verification Navigate to Observe Network Traffic . View the Traffic Flows tab, or any tab, to verify that the data is displayed. Navigate to Observe Dashboards . In the NetObserv/Health selection, verify that the flows are being ingested and sent to Loki, which is represented in the first graph. Additional resources Creating a network policy using the CLI
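If you prefer to create the same policy from the command line instead of the console form, the following manifest is a minimal sketch of the three ingress rules described above; it assumes the netobserv namespace and the allow-ingress policy name used in this example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress
  namespace: netobserv
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}            # rule 1: pods from the same namespace
    - from:
        - namespaceSelector:         # rule 2: pods from the openshift-console namespace
            matchLabels:
              kubernetes.io/metadata.name: openshift-console
    - from:
        - namespaceSelector:         # rule 3: pods from the openshift-monitoring namespace
            matchLabels:
              kubernetes.io/metadata.name: openshift-monitoring
  policyTypes:
    - Ingress

Apply it with oc apply -f allow-ingress.yaml, then run the same verification steps under Observe Network Traffic and Observe Dashboards.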
|
[
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: [\"openshift-console\", \"openshift-monitoring\"] 2",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/network-observability-network-policy
|
Chapter 5. Compliance Operator
|
Chapter 5. Compliance Operator 5.1. Compliance Operator release notes The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. These release notes track the development of the Compliance Operator in the OpenShift Container Platform. For an overview of the Compliance Operator, see Understanding the Compliance Operator . 5.1.1. OpenShift Compliance Operator 0.1.53 The following advisory is available for the OpenShift Compliance Operator 0.1.53: RHBA-2022:5537 - OpenShift Compliance Operator bug fix update 5.1.1.1. Bug fixes Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout . ( BZ#2069891 ) Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z architecture systems. ( BZ#2072597 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL , which is consistent with other checks that require human intervention. ( BZ#2077916 ) Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly: ocp4-cis-api-server-kubelet-client-cert ocp4-cis-api-server-kubelet-client-key ocp4-cis-kubelet-configure-tls-cert ocp4-cis-kubelet-configure-tls-key Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. ( BZ#2079813 ) Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set valid timeout length. ( BZ#2081952 ) Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. ( BZ#2088202 ) Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied.( BZ#2094382 ) Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. ( BZ#2094854 ) 5.1.1.2. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. 
As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.1.2. OpenShift Compliance Operator 0.1.52 The following advisory is available for the OpenShift Compliance Operator 0.1.52: RHBA-2022:4657 - OpenShift Compliance Operator bug fix update 5.1.2.1. New features and enhancements The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, see Supported compliance profiles . 5.1.2.2. Bug fixes Previously, the OpenSCAP container would crash due to a mount permission issue in a security environment where the DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. ( BZ#2082151 ) Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL . Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC . ( BZ#2072431 ) Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. ( BZ#2075029 ) Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. ( BZ#2071854 ) Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. ( BZ#2082431 ) 5.1.2.3. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.1.3. OpenShift Compliance Operator 0.1.49 The following advisory is available for the OpenShift Compliance Operator 0.1.49: RHBA-2022:1148 - OpenShift Compliance Operator bug fix and enhancement update 5.1.3.1. Bug fixes Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. ( BZ#1994609 ) Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, which resulted in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check will properly evaluate TLS settings that are consistent with the networking guidance and profile recommendations. ( BZ#2002695 ) Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies will work for deployments with more than 500 namespaces. ( BZ#2038909 ) Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations.
As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. ( BZ#2049141 ) Previously, the ocp4-cluster-version-operator-verify-integrity always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. ( BZ#2053602 ) Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plug-ins. As a result, the rule would always fail, even if all admission plug-ins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule will accurately pass with all admission controller plug-ins enabled. ( BZ#2058631 ) Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, the scan will schedule appropriately based on platform type and labels and will complete successfully. ( BZ#2056911 ) 5.1.4. OpenShift Compliance Operator 0.1.48 The following advisory is available for the OpenShift Compliance Operator 0.1.48: RHBA-2022:0416 - OpenShift Compliance Operator bug fix and enhancement update 5.1.4.1. Bug fixes Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None . This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform . ( BZ#2040282 ) Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state. With this release, a KubeletConfig object is created by the remediation, regardless of whether there is a manually created MachineConfig object for KubeletConfig . As a result, KubeletConfig remediations now work as expected. ( BZ#2040401 ) 5.1.5. OpenShift Compliance Operator 0.1.47 The following advisory is available for the OpenShift Compliance Operator 0.1.47: RHBA-2022:0014 - OpenShift Compliance Operator bug fix and enhancement update 5.1.5.1. New features and enhancements The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS): ocp4-pci-dss ocp4-pci-dss-node Additional rules and remediations for FedRAMP moderate impact level are added to the OCP4-moderate, OCP4-moderate-node, and rhcos4-moderate profiles. Remediations for KubeletConfig are now available in node-level profiles. 5.1.5.2. Bug fixes Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6.
If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules. Additionally, remediations are created only for rules that satisfy minimum version requirements. ( BZ#1965511 ) Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config , would not pass the regular expression check and therefore were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. ( BZ#2033009 ) 5.1.6. OpenShift Compliance Operator 0.1.44 The following advisory is available for the OpenShift Compliance Operator 0.1.44: RHBA-2021:4530 - OpenShift Compliance Operator bug fix and enhancement update 5.1.6.1. New features and enhancements In this release, the strictNodeScan option is now added to the ComplianceScan , ComplianceSuite and ScanSetting CRs. This option defaults to true , which matches the previous behavior, where an error occurred if a scan could not be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint . This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments. The Compliance Operator can now remediate KubeletConfig objects. A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster versus objects that cannot be fetched. Rule objects now contain two new attributes, checkType and description . These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does. This enhancement removes the requirement that you have to extend an existing profile in order to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods only tolerated the node-role.kubernetes.io/master taint , meaning that they ran either on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes.
Now, the scan pods tolerate all node taints. In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles: ocp4-nerc-cip ocp4-nerc-cip-node rhcos4-nerc-cip In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile. 5.1.6.2. Templating and variable use In this release, the remediation template now allows multi-value variables. With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used. 5.1.6.3. Bug fixes Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash. Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview . If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization. Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes. ( BZ#1988259 ) Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs. Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value. 5.1.7. Release Notes for Compliance Operator 0.1.39 The following advisory is available for the OpenShift Compliance Operator 0.1.39: RHBA-2021:3214 - OpenShift Compliance Operator bug fix and enhancement update 5.1.7.1. New features and enhancements Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that ships with PCI DSS profiles. Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile. 5.1.8. Additional resources Understanding the Compliance Operator 5.2.
Supported compliance profiles There are several profiles available as part of the Compliance Operator (CO) installation. 5.2.1. Compliance profiles The Compliance Operator provides the following compliance profiles: Table 5.1. Supported compliance profiles Profile Profile title Compliance Operator version Industry compliance benchmark Supported architectures ocp4-cis CIS Red Hat OpenShift Container Platform 4 Benchmark 0.1.39+ CIS Benchmarks TM footnote:cisbenchmark[To locate the CIS RedHat OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and type Kubernetes in the search box. Click on Kubernetes and then Download Latest CIS Benchmark , where you can then register to download the benchmark.] x86_64 ppc64le s390x ocp4-cis-node CIS Red Hat OpenShift Container Platform 4 Benchmark 0.1.39+ CIS Benchmarks TM footnote:cisbenchmark[] x86_64 ppc64le s390x ocp4-e8 Australian Cyber Security Centre (ACSC) Essential Eight 0.1.39+ ACSC Hardening Linux Workstations and Servers x86_64 ocp4-moderate NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level 0.1.39+ NIST SP-800-53 Release Search x86_64 rhcos4-e8 Australian Cyber Security Centre (ACSC) Essential Eight 0.1.39+ ACSC Hardening Linux Workstations and Servers x86_64 rhcos4-moderate NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS 0.1.39+ NIST SP-800-53 Release Search x86_64 ocp4-moderate-node NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level 0.1.44+ NIST SP-800-53 Release Search x86_64 ocp4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Platform level 0.1.44+ NERC CIP Standards x86_64 ocp4-nerc-cip-node North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Node level 0.1.44+ NERC CIP Standards x86_64 rhcos4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS 0.1.44+ NERC CIP Standards x86_64 ocp4-pci-dss PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4 0.1.47+ PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-node PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4 0.1.47+ PCI Security Standards (R) Council Document Library x86_64 ocp4-high NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level 0.1.52+ NIST SP-800-53 Release Search x86_64 ocp4-high-node NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level 0.1.52+ NIST SP-800-53 Release Search x86_64 rhcos4-high NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS 0.1.52+ NIST SP-800-53 Release Search x86_64 5.2.2. Additional resources For more information about viewing the compliance profiles available in your system, see Compliance Operator profiles in Understanding the Compliance Operator. 5.3. Installing the Compliance Operator Before you can use the Compliance Operator, you must ensure it is deployed in the cluster. 5.3.1. Installing the Compliance Operator through the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Compliance Operator, then click Install . 
Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues. Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.3.2. Installing the Compliance Operator using the CLI Prerequisites You must have admin privileges. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: openshift-compliance Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "release-0.1" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running: USD oc get deploy -n openshift-compliance Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.3.3. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.4. Compliance Operator scans The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. 
For more information on these API objects, run: USD oc explain scansettings or USD oc explain scansettingbindings 5.4.1. Running compliance scans You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default . Note For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object. Procedure Inspect the ScanSetting object by running: USD oc describe scansettings default -n openshift-compliance Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: pvAccessModes: - ReadWriteOnce 1 rotation: 3 2 size: 1Gi 3 roles: - worker 4 - master 5 scanTolerations: 6 default: - operator: Exists schedule: 0 1 * * * 7 1 The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. 2 The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. 3 The Compliance Operator will allocate one GB of storage for the scan results. 4 5 If the scan setting uses any profiles that scan cluster nodes, scan these node roles. 6 The default scan setting object also scans all the nodes. 7 The default scan setting object runs scans at 01:00 each day. As an alternative to the default scan setting, you can use default-auto-apply , which has the following settings: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default-auto-apply namespace: openshift-compliance autoUpdateRemediations: true 1 autoApplyRemediations: true 2 rawResultStorage: pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi schedule: 0 1 * * * roles: - worker - master scanTolerations: default: - operator: Exists 1 2 Setting autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Create the ScanSettingBinding object by running: USD oc create -f <file-name>.yaml -n openshift-compliance At this point in the process, the ScanSettingBinding object is reconciled and based on the Binding and the Bound settings. 
The Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects. Follow the compliance scan progress by running: USD oc get compliancescan -w -n openshift-compliance The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT . You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information. 5.4.2. Scheduling the result server pod on a worker node The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes. Procedure Create a ScanSetting custom resource (CR) for the Compliance Operator: Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml : apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * 1 The Compliance Operator uses this node to store scan results in ARF format. 2 The result server pod tolerates all taints. To create the ScanSetting CR, run the following command: USD oc create -f rs-workers.yaml Verification To verify that the ScanSetting object is created, run the following command: USD oc get scansettings rs-on-workers -n openshift-compliance -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true 5.5. Understanding the Compliance Operator The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. Important The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only. 5.5.1. Compliance Operator profiles There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules. 
View the available profiles: USD oc get -n <namespace> profiles.compliance This example displays the profiles in the default openshift-compliance namespace: USD oc get -n openshift-compliance profiles.compliance Example output NAME AGE ocp4-cis 32m ocp4-cis-node 32m ocp4-e8 32m ocp4-moderate 32m ocp4-moderate-node 32m ocp4-nerc-cip 32m ocp4-nerc-cip-node 32m ocp4-pci-dss 32m ocp4-pci-dss-node 32m rhcos4-e8 32m rhcos4-moderate 32m rhcos4-nerc-cip 32m These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile's name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product. View the details of a profile: USD oc get -n <namespace> -oyaml profiles.compliance <profile name> This example displays the details of the rhcos4-e8 profile: USD oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8 Example output apiVersion: compliance.openshift.io/v1alpha1 description: |- This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: ... id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos426smj compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - 
rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight View the rules within a desired profile: USD oc get -n <namespace> -oyaml rules.compliance <rule_name> This example displays the rhcos4-audit-rules-login-events rule in the rhcos4 profile: USD oc get -n openshift-compliance -oyaml rules.compliance rhcos4-audit-rules-login-events Example output apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos426smj compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 5.6. Managing the Compliance Operator This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object. 5.6.1. Updating security content Security content is shipped as container images that the ProfileBundle objects refer to. To accurately track updates to ProfileBundles and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag: Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: name: rhcos4 spec: contentImage: quay.io/user/ocp4-openscap-content@sha256:a1749f5150b19a9560a5732fe48a89f07bffc79c0832aa8c49ee5504590ae687 1 contentFile: ssg-rhcos4-ds.xml 1 Security container image. Each ProfileBundle is backed by a deployment. 
When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles. 5.6.2. Using image streams The contentImage reference points to a valid ImageStreamTag , and the Compliance Operator ensures that the content stays up to date automatically. Note ProfileBundle objects also accept ImageStream references. Example image stream USD oc get is -n openshift-compliance Example output NAME IMAGE REPOSITORY TAGS UPDATED openscap-ocp4-ds image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds latest 32 seconds ago Procedure Ensure that the lookup policy is set to local: USD oc patch is openscap-ocp4-ds \ -p '{"spec":{"lookupPolicy":{"local":true}}}' \ --type=merge imagestream.image.openshift.io/openscap-ocp4-ds patched -n openshift-compliance Use the name of the ImageStreamTag for the ProfileBundle by retrieving the istag name: USD oc get istag -n openshift-compliance Example output NAME IMAGE REFERENCE UPDATED openscap-ocp4-ds:latest image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds@sha256:46d7ca9b7055fe56ade818ec3e62882cfcc2d27b9bf0d1cbae9f4b6df2710c96 3 minutes ago Create the ProfileBundle : USD cat << EOF | oc create -f - apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: name: mybundle spec: contentImage: openscap-ocp4-ds:latest contentFile: ssg-rhcos4-ds.xml EOF This ProfileBundle will track the image and any changes that are applied to it, such as updating the tag to point to a different hash, will immediately be reflected in the ProfileBundle . 5.6.3. ProfileBundle CR example The bundle object needs two pieces of information: the URL of a container image that contains the contentImage and the file that contains the compliance content. The contentFile parameter is relative to the root of the file system. The built-in rhcos4 ProfileBundle object can be defined in the example below: apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: name: rhcos4 spec: contentImage: quay.io/complianceascode/ocp4:latest 1 contentFile: ssg-rhcos4-ds.xml 2 1 Content image location. 2 Location of the file containing the compliance content. Important The base image used for the content images must include coreutils . 5.6.4. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.7. Tailoring the Compliance Operator While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations' needs and requirements. The process of modifying a profile is called tailoring . The Compliance Operator provides an object to easily tailor profiles called a TailoredProfile . This assumes that you are extending a pre-existing profile, and allows you to enable and disable rules and values which come from the ProfileBundle . Note You will only be able to use rules and variables that are available as part of the ProfileBundle that the profile you want to extend belongs to. 5.7.1. Using tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. 
In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle : USD oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Browse the available variables in the same ProfileBundle : USD oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Create a tailored profile named nist-moderate-modified : Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made: Example new-profile-node.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive Table 5.2. Attributes for spec variables Attribute Description extends Name of the Profile object upon which this TailoredProfile is built. title Human-readable title of the TailoredProfile . disableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled. enableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled. description Human-readable text describing the TailoredProfile . setValues A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting. Create the TailoredProfile object: USD oc create -n openshift-compliance -f new-profile-node.yaml 1 1 The TailoredProfile object is created in the default openshift-compliance namespace. Example output tailoredprofile.compliance.openshift.io/nist-moderate-modified created Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object. Example new-scansettingbinding.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Create the ScanSettingBinding object: USD oc create -n openshift-compliance -f new-scansettingbinding.yaml Example output scansettingbinding.compliance.openshift.io/nist-moderate-modified created 5.8. 
Retrieving Compliance Operator raw results When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes. 5.8.1. Obtaining Compliance Operator raw results from a persistent volume Procedure The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF). Explore the ComplianceSuite object: USD oc get compliancesuites nist-moderate-modified -o json \ | jq '.status.scanStatuses[].resultsStorage' { "name": "rhcos4-moderate-worker", "namespace": "openshift-compliance" } { "name": "rhcos4-moderate-master", "namespace": "openshift-compliance" } This shows the persistent volume claims where the raw results are accessible. Verify the raw data location by using the name and namespace of one of the results: USD oc get pvc -n openshift-compliance rhcos4-moderate-worker Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m Fetch the raw results by spawning a pod that mounts the volume and copying the results: Example pod apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi8/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker After the pod is running, download the results: USD oc cp pv-extract:/workers-scan-results . Important Spawning a pod that mounts the persistent volume will keep the claim as Bound . If the volume's storage class in use has permissions set to ReadWriteOnce , the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location. After the extraction is complete, the pod can be deleted: USD oc delete pod pv-extract 5.9. Managing Compliance Operator result and remediation Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified. 5.9.1. Filters for compliance check results By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the steps after the results are generated. List checks that belong to a specific suite: USD oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite List checks that belong to a specific scan: USD oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check. 
List all failing checks that can be remediated automatically: USD oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation' List all failing checks that must be remediated manually: USD oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation' The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object. Table 5.3. ComplianceCheckResult Status ComplianceCheckResult Status Description PASS Compliance check ran to completion and passed. FAIL Compliance check ran to completion and failed. INFO Compliance check ran to completion and found something not severe enough to be considered an error. MANUAL Compliance check does not have a way to automatically assess the success or failure and must be checked manually. INCONSISTENT Compliance check reports different results from different sources, typically cluster nodes. ERROR Compliance check ran, but could not complete properly. NOT-APPLICABLE Compliance check did not run because it is not applicable or not selected. 5.9.2. Reviewing a remediation Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and the hardening trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult . After first scan, check for remediations with the state MissingDependencies . Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects . This example is redacted to only show spec and status and omits metadata : spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text. To see exactly what the remediation does when applied, the MachineConfig object contents use the Ignition objects for the configuration. Refer to the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being create by this remediation ( /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf ) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file. Note The contents of the files are URL-encoded. 
Use the following Python script to view the contents: USD echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))" Example output net.ipv4.conf.all.accept_redirects=0 5.9.3. Applying remediation when using customized machine config pools When you create a custom MachineConfigPool , add a label to the MachineConfigPool so that machineConfigPoolSelector present in the KubeletConfig can match the label with MachineConfigPool . Important Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation. Procedure List the nodes. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.23.3+d99c04f ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.23.3+d99c04f ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.23.3+d99c04f ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.23.3+d99c04f ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.23.3+d99c04f Add a label to nodes. USD oc label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>= Example output node/ip-10-0-166-81.us-east-2.compute.internal labeled Create custom MachineConfigPool CR. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: "" 1 The labels field defines label name to add for Machine config pool(MCP). Verify MCP created successfully. USD oc get mcp -w 5.9.4. Applying a remediation The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true : USD oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":true}}' --type=merge After the Compliance Operator processes the applied remediation, the status.ApplicationState attribute would change to Applied or to Error if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-USDscan-name-USDsuite-name . That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node. Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-USDscan-name-USDsuite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true . The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object. 
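For example, a minimal sketch of enabling automatic remediation through the default ScanSetting object might look like the following; the resource name default and the openshift-compliance namespace are assumptions based on a default installation, and the change is only expected to affect compliance suites generated from this ScanSetting afterwards:

USD oc patch scansettings default -n openshift-compliance \
    --type merge -p '{"autoApplyRemediations":true}'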
Warning Applying remediations automatically should only be done with careful consideration. 5.9.5. Remediating a platform check manually Checks for Platform scans typically have to be remediated manually by the administrator for two reasons: It is not always possible to automatically determine the value that must be set. For example, one of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow. Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised. Procedure The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml . The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object that is checked, so you can modify that object to remediate the issue: USD oc edit image.config.openshift.io/cluster Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 Re-run the scan: USD oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan= 5.9.6. Updating remediations When a new version of compliance content is used, it might deliver a new and different version of a remediation than the version that is currently applied. The Compliance Operator will keep the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that had been applied earlier, but whose content was updated, changes its status to Outdated . The outdated objects are labeled so that they can be searched for easily. The previously applied remediation contents would then be stored in the spec.outdated attribute of a ComplianceRemediation object and the new updated contents would be stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it would be used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes. Procedure Search for any outdated remediations: USD oc get complianceremediations -lcomplianceoperator.openshift.io/outdated-remediation= Example output NAME STATE workers-scan-no-empty-passwords Outdated The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes.
Apply the newer version of the remediation: USD oc patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{"op":"remove", "path":/spec/outdated}]' The remediation state will switch from Outdated to Applied : USD oc get complianceremediations workers-scan-no-empty-passwords Example output NAME STATE workers-scan-no-empty-passwords Applied The nodes will apply the newer remediation version and reboot. 5.9.7. Unapplying a remediation It might be required to unapply a remediation that was previously applied. Procedure Set the apply flag to false : USD oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects -p '{"spec":{"apply":false}}' --type=merge The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation. Important All affected nodes with the remediation will be rebooted. 5.9.8. Removing a KubeletConfig remediation KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation. Procedure Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation: USD oc get remediation one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied 1 The scan name of the remediation. 2 The remediation that was added to the KubeletConfig objects. Note If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available , nodefs.available , nodefs.inodesFree , imagefs.available , and imagefs.inodesFree . If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. 
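For reference, a minimal sketch of a KubeletConfig object that sets all five evictionHard parameters is shown below; the object name, the pool selector label, and the threshold values are hypothetical and must be adapted to your cluster:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: custom-kubelet-eviction            # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # must match the label on your MachineConfigPool
  kubeletConfig:
    evictionHard:
      memory.available: "200Mi"            # example thresholds only
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "10%"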
Remove the remediation: Set apply to false for the remediation object: USD oc patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{"spec":{"apply":false}}' --type=merge Using the scan-name , find the KubeletConfig object that the remediation was applied to: USD oc get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master Example output NAME AGE compliance-operator-kubelet-master 2m34s Manually remove the remediation, imagefs.available: 10% , from the KubeletConfig object: USD oc edit KubeletConfig compliance-operator-kubelet-master Important All affected nodes with the remediation will be rebooted. Note You must also exclude the rule from any scheduled scans in your tailored profiles that auto-applies the remediation, otherwise, the remediation will be re-applied during the scheduled scan. 5.9.9. Inconsistent ComplianceScan The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool. Important It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical. If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT . All ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check . Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation . If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the compliance.openshift.io/rescan= option: USD oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan= 5.9.10. Additional resources Modifying nodes . 5.10. Performing advanced Compliance Operator tasks The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling. 5.10.1. Using the ComplianceSuite and ComplianceScan objects directly While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly: Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool. Pointing the Scan to a bespoke config map with a tailoring file. 
For testing or development when the overhead of parsing profiles from bundles is not required. The following example shows a ComplianceSuite that scans the worker machines with only a single rule: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: "" The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects. To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite . 5.10.2. Using raw tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Create the ConfigMap object from a file: USD oc create configmap <scan_name> --from-file=tailoring.xml=/path/to/the/tailoringFile.xml Reference the tailoring file in a scan that belongs to a suite: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest debug: true tailoringConfigMap: name: <scan_name> nodeSelector: node-role.kubernetes.io/worker: "" 5.10.3. Performing a rescan Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option: USD oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan= A rescan generates four additional mc for rhcos-moderate profile: USD oc get mc Example output 75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub Important When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs. 5.10.4. 
Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) that defaults to 1GB in size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation , which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment. 5.10.4.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.10.5. Applying remediations generated by suite scans Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations . This allows the Operator to apply all of the created remediations. Procedure Apply the compliance.openshift.io/apply-remediations annotation by running: USD oc annotate compliancesuites/<suite_name> compliance.openshift.io/apply-remediations= 5.10.6. Automatically update remediations In some cases, a scan with newer content might mark remediations as OUTDATED . As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones. Procedure Apply the compliance.openshift.io/remove-outdated annotation: USD oc annotate compliancesuites/<suite_name> compliance.openshift.io/remove-outdated= Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically. 5.10.7. Creating a custom SCC for the Compliance Operator In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector . Prerequisites You must have admin privileges.
Procedure Define the SCC in a YAML file named restricted-adjusted-compliance.yaml : SecurityContextConstraints object definition allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret 1 The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group. 2 Service Account used by Compliance Operator Scanner pod. Create the SCC: USD oc create -f restricted-adjusted-compliance.yaml Example output securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created Verification Verify the SCC was created: USD oc get scc restricted-adjusted-compliance Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 5.10.8. Additional resources Managing security context constraints 5.11. Troubleshooting the Compliance Operator This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips: The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command: USD oc get events -n openshift-compliance Or view events for an object like a scan using the command: USD oc describe compliancescan/<scan_name> The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq : USD oc logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == "profilebundlectrl")' The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc , for example: USD date -d @1596184628.955853 --utc Many custom resources, most importantly ComplianceSuite and ScanSetting , allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, together with the debug option enabled, the scanner container logs in the scanner pod would show the raw OpenSCAP logs. 5.11.1. Anatomy of a scan The following sections outline the components and stages of Compliance Operator scans. 5.11.1.1. 
Compliance sources The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes. USD oc get profilebundle.compliance USD oc get profile.compliance The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle , you can find the deployment and view logs of the pods in a deployment: USD oc logs -lprofile-bundle=ocp4 -c profileparser USD oc get deployments,pods -lprofile-bundle=ocp4 USD oc logs pods/<pod-name> USD oc describe pod/<pod-name> -c profileparser 5.11.1.2. The ScanSetting and ScanSettingBinding objects lifecycle and debugging With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl . These objects have no status. Any issues are communicated in form of events: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite . 5.11.1.3. ComplianceSuite custom resource lifecycle and debugging The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by controller tagged with logger=suitectrl . This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done: USD oc get cronjobs Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m For the most important issues, events are emitted. View them with oc describe compliancesuites/<name> . The Suite objects also have a Status subresource that is updated when any of Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller. 5.11.1.4. ComplianceScan custom resource lifecycle and debugging The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases: 5.11.1.4.1. Pending phase The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with ERROR result, otherwise proceeds to the Launching phase. 5.11.1.4.2. 
Launching phase In this phase, several config maps are created that contain either the environment for the scanner pods or the script that the scanner pods evaluate. List the config maps: USD oc get cm -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= These config maps will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to do it. Afterwards, a persistent volume claim is created per scan to store the raw ARF results: USD oc get pvc -lcompliance.openshift.io/scan-name=<scan_name> The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server to which the individual scanner pods upload the full ARF results. Each server can run on a different node. The full ARF results might be very large, and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS. Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name: USD oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels Example output NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner At this point, the scan proceeds to the Running phase.
These result config maps are labeled with the scan name ( compliance.openshift.io/scan-name=<scan_name> ): USD oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Example output Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version="1.0" encoding="UTF-8"?> ... Scanner pods for Platform scans are similar, except: There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory where the scanner container reads them from. The scanner container does not need to mount the host file system. When the scanner pods are done, the scans move on to the Aggregating phase. 5.11.1.4.4. Aggregating phase In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container. When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects: USD oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium and ComplianceRemediation objects: USD oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase. 5.11.1.4.5. Done phase In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the scan instance would then recreate the deployment again. It is also possible to trigger a re-run of a scan in the Done phase by annotating it: USD oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan= After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true . The OpenShift Container Platform administrator would now review the remediations and apply them as needed.
If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps to and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over. 5.11.1.5. ComplianceRemediation controller lifecycle and debugging The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true : USD oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge The ComplianceRemediation controller ( logger=remediationctrl ) reconciles the modified object. The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite MachineConfig object that contains all the applied remediations. The MachineConfig object always begins with 75- and is named after the scan and the suite: USD oc get mc | grep 75- Example output 75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s The remediations the mc currently consists of are listed in the machine config's annotations: USD oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements Example output Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod: The ComplianceRemediation controller's algorithm works like this: All currently applied remediations are read into an initial remediation set. If the reconciled remediation is supposed to be applied, it is added to the set. A MachineConfig object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed. If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted). Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label - see the Machine Config Operator documentation for more details. The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it: USD oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan= The scan will run and finish. Check for the remediation to pass: USD oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod Example output NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium 5.11.1.6. Useful labels Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label. The Compliance Operator schedules the following workloads: scanner : Performs the compliance scan. resultserver : Stores the raw results for the compliance scan. aggregator : Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations). suitererunner : Will tag a suite to be re-run (when a schedule is set). 
profileparser : Parses a datastream and creates the appropriate profiles, rules and variables. When debugging and logs are required for a certain workload, run: USD oc logs -l workload=<workload_name> -c <container_name> 5.11.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.12. Uninstalling the Compliance Operator You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console. 5.12.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform To remove the Compliance Operator, you must first delete the Compliance Operator custom resource definitions (CRDs). After the CRDs are removed, you can then remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure To remove the Compliance Operator by using the OpenShift Container Platform web console: Remove CRDs that were installed by the Compliance Operator: Switch to the Administration CustomResourceDefinitions page. Search for compliance.openshift.io in the Name field. Click the Options menu to each of the following CRDs, and select Delete Custom Resource Definition : ComplianceCheckResult ComplianceRemediation ComplianceScan ComplianceSuite ProfileBundle Profile Rule ScanSettingBinding ScanSetting TailoredProfile Variable Remove the OpenShift Compliance project: Switch to the Home Projects page. Click the Options menu to the openshift-compliance project, and select Delete Project . Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete . 5.13. Understanding the Custom Resource Definitions The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found. 5.13.1. CRDs workflow The CRD provides you the following workflow to complete the compliance scans: Define your compliance scan requirements Configure the compliance scan settings Process compliance requirements with compliance scans settings Monitor the compliance scans Check the compliance scan results 5.13.2. Defining the compliance scan requirements By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. 
You can also customize the default profiles by using a TailoredProfile object. 5.13.2.1. ProfileBundle object When you install the Compliance Operator, it includes ready-to-run ProfileBundle object. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object. Example ProfileBundle object apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance spec: contentFile: ssg-ocp4-ds.xml 1 contentImage: quay.io/complianceascode/ocp4:latest 2 status: dataStreamStatus: VALID 3 1 Specify a path from the root directory (/) where the profile file is located. 2 Specify the container image that encapsulates the profile files. 3 Indicates whether the Compliance Operator was able to parse the content files. Note When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred. Troubleshooting When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays PENDING state. As a workaround, you can move to a different image than the one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state. 5.13.2.2. Profile object The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailorProfile object. Note You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects. Example Profile object apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: "<version number>" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile> 1 Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan. 2 Specify either a Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 3 Specify the list of rules for the profile. Each rule corresponds to a single check. 5.13.2.3. Rule object The Rule object, which forms the profiles, are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed. 
Example Rule object apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule> 1 Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check. 2 Specify the XCCDF name of the rule, which is parsed directly from the datastream. 3 Specify the severity of the rule when it fails. Note The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object. 5.13.2.4. TailoredProfile object Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap , which can be referenced by a ComplianceScan object. Tip You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding , see ScanSettingBinding object. Example TailoredProfile object apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4 1 This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list. 2 Specifies the XCCDF name of the tailored profile. 3 Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan . 4 Shows the state of the object such as READY , PENDING , and FAILURE . If the state of the object is ERROR , then the attribute status.errorMessage provides the reason for the failure. With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. To create a new Profile , set the following configuration parameters : an appropriate title extends value must be empty scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: <scan type> Note If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. 
Adding the -node suffix to the name of the TailoredProfile object results in node scan type. 5.13.3. Configuring the compliance scan settings After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, Compliance Operator provides you with a ScanSetting object. 5.13.3.1. ScanSetting object Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects: default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically. default-auto-apply - it runs a scan every day at 1AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true. Example ScanSetting object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: <name of the scan> autoApplyRemediations: false 1 autoUpdateRemediations: false 2 schedule: "0 1 * * *" 3 rawResultStorage: size: "2Gi" 4 rotation: 10 5 roles: 6 - worker - master 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates. 3 Specify how often the scan should be run in cron format. 4 Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi 5 Specify the amount of scans for which the raw results will be stored. The default value is 3 . As the older results get rotated, the administrator has to store the results elsewhere before the rotation happens. Note To disable the rotation policy, set the value to 0 . 6 Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool . 5.13.4. Processing the compliance scan requirements with compliance scans settings When you have defined the compliance scan requirements and configured the settings to run the scans, then the Compliance Operator processes it using the ScanSettingBinding object. 5.13.4.1. ScanSettingBinding object Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects. Example ScanSettingBinding object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 1 Specify the details of Profile or TailoredProfile object to scan your environment. 2 Specify the operational constraints, such as schedule and storage size. The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. 
To get the list of compliance suites, run the following command: $ oc get compliancesuites Important If you delete a ScanSettingBinding, then the compliance suite is also deleted. 5.13.5. Tracking the compliance scans After the compliance suite is created, you can monitor the status of the deployed scans using the ComplianceSuite object. 5.13.5.1. ComplianceSuite object The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result. For Node type scans, you should map the scan to the MachineConfigPool, since it contains the remediations for any issues. If you specify a label, ensure that it directly applies to a pool. Example ComplianceSuite object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: "0 1 * * *" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" nodeSelector: node-role.kubernetes.io/worker: "" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Specify how often the scan should be run in cron format. 3 Specify a list of scan specifications to run in the cluster. 4 Indicates the progress of the scans. 5 Indicates the overall verdict of the suite. The suite creates ComplianceScan objects in the background based on the scans parameter. You can programmatically fetch the ComplianceSuite events. To get the events for the suite, run the following command: $ oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite> Important You might introduce errors when you manually define the ComplianceSuite, because it contains the XCCDF attributes. 5.13.5.2. Advanced ComplianceScan Object The Compliance Operator includes options for advanced users who want to debug or integrate with existing tooling. It is recommended that you do not create a ComplianceScan object directly; instead, manage it using a ComplianceSuite object. Example Advanced ComplianceScan object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest 3 rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4 nodeSelector: 5 node-role.kubernetes.io/worker: "" status: phase: DONE 6 result: NON-COMPLIANT 7 1 Specify either Node or Platform. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. 2 Specify the XCCDF identifier of the profile that you want to run. 3 Specify the container image that encapsulates the profile files. 4 This is optional. Specify the scan to run a single rule. This rule must be identified by its XCCDF ID and must belong to the specified profile. Note If you skip the rule parameter, the scan runs for all the available rules of the specified profile. 5 If you are on OpenShift Container Platform and want to generate a remediation, the nodeSelector label must match the MachineConfigPool label.
Note If you do not specify the nodeSelector parameter or do not match the MachineConfigPool label, the scan still runs, but it does not create a remediation. 6 Indicates the current phase of the scan. 7 Indicates the verdict of the scan. Important If you delete a ComplianceSuite object, then all the associated scans are deleted. When the scan is complete, it generates the results as custom resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScan events. To get the events for the scan, run the following command: oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the scan> 5.13.6. Viewing the compliance results When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations. 5.13.6.1. ComplianceCheckResult object When you run a scan with a specific profile, several rules in the profile are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule. Example ComplianceCheckResult object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2 1 Describes the severity of the scan check. 2 Describes the result of the check. The possible values are: PASS: the check was successful. FAIL: the check was unsuccessful. INFO: the check was successful and found something not severe enough to be considered an error. MANUAL: the check cannot automatically assess the status and a manual check is required. INCONSISTENT: different nodes report different results. ERROR: the check ran successfully, but could not complete. NOTAPPLICABLE: the check did not run because it is not applicable. To get all the check results from a suite, run the following command: oc get compliancecheckresults -l compliance.openshift.io/suite=<suite_name> 5.13.6.2. ComplianceRemediation object For a specific check, a fix can be specified in the datastream. However, if a Kubernetes fix is available, then the Compliance Operator creates a ComplianceRemediation object.
Example ComplianceRemediation object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3 1 true indicates the remediation was applied. false indicates the remediation was not applied. 2 Includes the definition of the remediation. 3 Indicates remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them. To get all the remediations from a suite, run the following command: oc get complianceremediations -l compliance.openshift.io/suite=<suite name> To list all failing checks that can be remediated automatically, run the following command: oc get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation' To list all failing checks that can be remediated manually, run the following command: oc get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'
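To apply one of the remediations returned by these commands, set its spec.apply field to true. A minimal sketch, where <remediation_name> is a placeholder for the name of a ComplianceRemediation object returned by the previous command:
oc patch complianceremediations/<remediation_name> --patch '{"spec":{"apply":true}}' --type=merge
If the ScanSetting used by the suite sets autoApplyRemediations to true, this step happens automatically.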
|
[
"oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis",
"oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"release-0.1\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"oc explain scansettings",
"oc explain scansettingbindings",
"oc describe scansettings default -n openshift-compliance",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: pvAccessModes: - ReadWriteOnce 1 rotation: 3 2 size: 1Gi 3 roles: - worker 4 - master 5 scanTolerations: 6 default: - operator: Exists schedule: 0 1 * * * 7",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default-auto-apply namespace: openshift-compliance autoUpdateRemediations: true 1 autoApplyRemediations: true 2 rawResultStorage: pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi schedule: 0 1 * * * roles: - worker - master scanTolerations: default: - operator: Exists",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc create -f <file-name>.yaml -n openshift-compliance",
"oc get compliancescan -w -n openshift-compliance",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *",
"oc create -f rs-workers.yaml",
"oc get scansettings rs-on-workers -n openshift-compliance -o yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true",
"oc get -n <namespace> profiles.compliance",
"oc get -n openshift-compliance profiles.compliance",
"NAME AGE ocp4-cis 32m ocp4-cis-node 32m ocp4-e8 32m ocp4-moderate 32m ocp4-moderate-node 32m ocp4-nerc-cip 32m ocp4-nerc-cip-node 32m ocp4-pci-dss 32m ocp4-pci-dss-node 32m rhcos4-e8 32m rhcos4-moderate 32m rhcos4-nerc-cip 32m",
"oc get -n <namespace> -oyaml profiles.compliance <profile name>",
"oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8",
"apiVersion: compliance.openshift.io/v1alpha1 description: |- This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos426smj compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight",
"oc get -n <namespace> -oyaml rules.compliance <rule_name>",
"oc get -n openshift-compliance -oyaml rules.compliance rhcos4-audit-rules-login-events",
"apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos426smj compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: name: rhcos4 spec: contentImage: quay.io/user/ocp4-openscap-content@sha256:a1749f5150b19a9560a5732fe48a89f07bffc79c0832aa8c49ee5504590ae687 1 contentFile: ssg-rhcos4-ds.xml",
"oc get is -n openshift-compliance",
"NAME IMAGE REPOSITORY TAGS UPDATED openscap-ocp4-ds image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds latest 32 seconds ago",
"oc patch is openscap-ocp4-ds -p '{\"spec\":{\"lookupPolicy\":{\"local\":true}}}' --type=merge imagestream.image.openshift.io/openscap-ocp4-ds patched -n openshift-compliance",
"oc get istag -n openshift-compliance",
"NAME IMAGE REFERENCE UPDATED openscap-ocp4-ds:latest image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds@sha256:46d7ca9b7055fe56ade818ec3e62882cfcc2d27b9bf0d1cbae9f4b6df2710c96 3 minutes ago",
"cat << EOF | oc create -f - apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: name: mybundle spec: contentImage: openscap-ocp4-ds:latest contentFile: ssg-rhcos4-ds.xml EOF",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: name: rhcos4 spec: contentImage: quay.io/complianceascode/ocp4:latest 1 contentFile: ssg-rhcos4-ds.xml 2",
"oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4",
"oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive",
"oc create -n openshift-compliance -f new-profile-node.yaml 1",
"tailoredprofile.compliance.openshift.io/nist-moderate-modified created",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"oc create -n openshift-compliance -f new-scansettingbinding.yaml",
"scansettingbinding.compliance.openshift.io/nist-moderate-modified created",
"oc get compliancesuites nist-moderate-modified -o json | jq '.status.scanStatuses[].resultsStorage' { \"name\": \"rhcos4-moderate-worker\", \"namespace\": \"openshift-compliance\" } { \"name\": \"rhcos4-moderate-master\", \"namespace\": \"openshift-compliance\" }",
"oc get pvc -n openshift-compliance rhcos4-moderate-worker",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m",
"apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi8/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: \"/workers-scan-results\" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker",
"oc cp pv-extract:/workers-scan-results .",
"oc delete pod pv-extract",
"oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite",
"oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan",
"oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'",
"oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'",
"spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied",
"echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"",
"net.ipv4.conf.all.accept_redirects=0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.23.3+d99c04f ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.23.3+d99c04f ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.23.3+d99c04f ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.23.3+d99c04f ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.23.3+d99c04f",
"oc label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=",
"node/ip-10-0-166-81.us-east-2.compute.internal labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"",
"oc get mcp -w",
"oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=",
"oc get complianceremediations -lcomplianceoperator.openshift.io/outdated-remediation=",
"NAME STATE workers-scan-no-empty-passwords Outdated",
"oc patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'",
"oc get complianceremediations workers-scan-no-empty-passwords",
"NAME STATE workers-scan-no-empty-passwords Applied",
"oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects -p '{\"spec\":{\"apply\":false}}' --type=merge",
"oc get remediation one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied",
"oc patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge",
"oc get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master",
"NAME AGE compliance-operator-kubelet-master 2m34s",
"oc edit KubeletConfig compliance-operator-kubelet-master",
"oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc create configmap <scan_name> --from-file=tailoring.xml=/path/to/the/tailoringFile.xml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest debug: true tailoringConfigMap: name: <scan_name> nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=",
"oc get mc",
"75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"oc annotate compliancesuites/<suite-_name> compliance.openshift.io/apply-remediations=",
"oc annotate compliancesuites/<suite_name> compliance.openshift.io/remove-outdated=",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc create -f restricted-adjusted-compliance.yaml",
"securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created",
"oc get scc restricted-adjusted-compliance",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]",
"oc get events -n openshift-compliance",
"oc describe compliancescan/<scan_name>",
"oc logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'",
"date -d @1596184628.955853 --utc",
"oc get profilebundle.compliance",
"oc get profile.compliance",
"oc logs -lprofile-bundle=ocp4 -c profileparser",
"oc get deployments,pods -lprofile-bundle=ocp4",
"oc logs pods/<pod-name>",
"oc describe pod/<pod-name> -c profileparser",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created",
"oc get cronjobs",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m",
"oc get cm -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=",
"oc get pvc -lcompliance.openshift.io/scan-name=<scan_name>",
"oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels",
"NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner At this point, the scan proceeds to the Running phase.",
"oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod",
"Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>",
"oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium",
"oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied",
"oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=",
"oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc get mc | grep 75-",
"75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s",
"oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements",
"Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:",
"oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=",
"oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod",
"NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium",
"oc logs -l workload=<workload_name> -c <container_name>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance spec: contentFile: ssg-ocp4-ds.xml 1 contentImage: quay.io/complianceascode/ocp4:latest 2 status: dataStreamStatus: VALID 3",
"apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>",
"apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4",
"compliance.openshift.io/product-type: <scan type>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: <name of the scan> autoApplyRemediations: false 1 autoUpdateRemediations: false 2 schedule: \"0 1 * * *\" 3 rawResultStorage: size: \"2Gi\" 4 rotation: 10 5 roles: 6 - worker - master",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc get compliancesuites",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT",
"oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: quay.io/complianceascode/ocp4:latest 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7",
"get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2",
"get compliancecheckresults -l compliance.openshift.io/suite=<suit name>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3",
"get complianceremediations -l compliance.openshift.io/suite=<suite name>",
"get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'",
"get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/compliance-operator
|
Chapter 1. Planning a GFS2 file system deployment
|
Chapter 1. Planning a GFS2 file system deployment The Red Hat Global File System 2 (GFS2) file system is a 64-bit symmetric cluster file system which provides a shared name space and manages coherency between multiple nodes sharing a common block device. A GFS2 file system is intended to provide a feature set which is as close as possible to a local file system, while at the same time enforcing full cluster coherency between nodes. To achieve this, the nodes employ a cluster-wide locking scheme for file system resources. This locking scheme uses communication protocols such as TCP/IP to exchange locking information. In a few cases, the Linux file system API does not allow the clustered nature of GFS2 to be totally transparent; for example, programs using POSIX locks in GFS2 should avoid using the GETLK function since, in a clustered environment, the process ID may be for a different node in the cluster. In most cases however, the functionality of a GFS2 file system is identical to that of a local file system. The Red Hat Enterprise Linux (RHEL) Resilient Storage Add-On provides GFS2, and it depends on the RHEL High Availability Add-On to provide the cluster management required by GFS2. The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. To get the best performance from GFS2, it is important to take into account the performance considerations which stem from the underlying design. Just like a local file system, GFS2 relies on the page cache in order to improve performance by local caching of frequently used data. In order to maintain coherency across the nodes in the cluster, cache control is provided by the glock state machine. Important Make sure that your deployment of the Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. 1.1. GFS2 file system format version 1802 As of Red Hat Enterprise Linux 9, GFS2 file systems are created with format version 1802. Format version 1802 enables the following features: Extended attributes in the trusted namespace ("trusted.* xattrs") are recognized by gfs2 and gfs2-utils . The rgrplvb option is active by default. This allows gfs2 to attach updated resource group data to DLM lock requests, so the node acquiring the lock does not need to update the resource group information from disk. This improves performance in some cases. Filesystems created with the new format version will not be able to be mounted under earlier RHEL versions and older versions of the fsck.gfs2 utility will not be able to check them. Users can create a file system with the older format version by running the mkfs.gfs2 command with the option -o format=1801 . Users can upgrade the format version of an older file system running tunegfs2 -r 1802 device on an unmounted file system. Downgrading the format version is not supported. 1.2. Key GFS2 parameters to determine There are a number of key GFS2 parameters you should plan for before you install and configure a GFS2 file system. GFS2 nodes Determine which nodes in the cluster will mount the GFS2 file systems. Number of file systems Determine how many GFS2 file systems to create initially. More file systems can be added later. File system name Each GFS2 file system should have a unique name. This name is usually the same as the LVM logical volume name and is used as the DLM lock table name when a GFS2 file system is mounted. 
For example, this guide uses file system names mydata1 and mydata2 in some example procedures. Journals Determine the number of journals for your GFS2 file systems. GFS2 requires one journal for each node in the cluster that needs to mount the file system. For example, if you have a 16-node cluster but need to mount only the file system from two nodes, you need only two journals. GFS2 allows you to add journals dynamically at a later point with the gfs2_jadd utility as additional servers mount a file system. Storage devices and partitions Determine the storage devices and partitions to be used for creating logical volumes (using lvmlockd ) in the file systems. Time protocol Make sure that the clocks on the GFS2 nodes are synchronized. It is recommended that you use the Precision Time Protocol (PTP) or, if necessary for your configuration, the Network Time Protocol (NTP) software provided with your Red Hat Enterprise Linux distribution. The system clocks in GFS2 nodes must be within a few minutes of each other to prevent unnecessary inode time stamp updating. Unnecessary inode time stamp updating severely impacts cluster performance. Note You may see performance problems with GFS2 when many create and delete operations are issued from more than one node in the same directory at the same time. If this causes performance problems in your system, you should localize file creation and deletions by a node to directories specific to that node as much as possible. 1.3. GFS2 support considerations To be eligible for support from Red Hat for a cluster running a GFS2 file system, you must take into account the support policies for GFS2 file systems. Note For full information about Red Hat's support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters . 1.3.1. Maximum file system and cluster size The following table summarizes the current maximum file system size and number of nodes that GFS2 supports. Table 1.1. GFS2 Support Limits Parameter Maximum Number of nodes 16 (x86, Power8 on PowerVM) 4 (s390x under z/VM) File system size 100TB on all supported architectures GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. If your system requires larger GFS2 file systems than are currently supported, contact your Red Hat service representative. When determining the size of your file system, you should consider your recovery needs. Running the fsck.gfs2 command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk subsystem failure, recovery time is limited by the speed of your backup media. For information about the amount of memory the fsck.gfs2 command requires, see Determining required memory for running fsck.gfs2 . 1.3.2. Minimum cluster size Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, Red Hat does not support the use of GFS2 as a single-node file system, with the following exceptions: Red Hat supports single-node GFS2 file systems for mounting snapshots of cluster file systems as might be needed, for example, for backup purposes. A single-node cluster mounting GFS2 file systems (which uses DLM) is supported for the purposes of a secondary-site Disaster Recovery (DR) node. This exception is for DR purposes only and not for transferring the main cluster workload to the secondary site. 
For example, copying off the data from the filesystem mounted on the secondary site while the primary site is offline is supported. However, migrating a workload from the primary site directly to a single-node cluster secondary site is unsupported. If the full work load needs to be migrated to the single-node secondary site then the secondary site must be the same size as the primary site. Red Hat recommends that when you mount a GFS2 file system in a single-node cluster you specify the errors=panic mount option so that the single-node cluster will panic when a GFS2 withdraw occurs since the single-node cluster will not be able to fence itself when encountering file system errors. Red Hat supports a number of high-performance single-node file systems that are optimized for single node and thus have generally lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system. For information about the file systems that Red Hat Enterprise Linux 9 supports, see Managing file systems . 1.3.3. Shared storage considerations While a GFS2 file system may be used outside of LVM, Red Hat supports only GFS2 file systems that are created on a shared LVM logical volume. When you configure a GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared storage. Asymmetric cluster configurations in which some nodes have access to the shared storage and others do not are not supported. This does not require that all nodes actually mount the GFS2 file system itself. 1.4. GFS2 formatting considerations To format your GFS2 file system to optimize performance, you should take these recommendations into account. Important Make sure that your deployment of the Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. File System Size: Smaller Is Better GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100TB. Note that even though GFS2 large file systems are possible, that does not mean they are recommended. The rule of thumb with GFS2 is that smaller is better: it is better to have 10 1TB file systems than one 10TB file system. There are several reasons why you should keep your GFS2 file systems small: Less time is required to back up each file system. Less time is required if you need to check the file system with the fsck.gfs2 command. Less memory is required if you need to check the file system with the fsck.gfs2 command. In addition, fewer resource groups to maintain mean better performance. Of course, if you make your GFS2 file system too small, you might run out of space, and that has its own consequences. You should consider your own use cases before deciding on a size. Block Size: Default (4K) Blocks Are Preferred The mkfs.gfs2 command attempts to estimate an optimal block size based on device topology. In general, 4K blocks are the preferred block size because 4K is the default page size (memory) for Red Hat Enterprise Linux. Unlike some other file systems, GFS2 does most of its operations using 4K kernel buffers. If your block size is 4K, the kernel has to do less work to manipulate the buffers. It is recommended that you use the default block size, which should yield the highest performance. 
You may need to use a different block size only if you require efficient storage of many very small files. Journal Size: Default (128MB) Is Usually Optimal When you run the mkfs.gfs2 command to create a GFS2 file system, you may specify the size of the journals. If you do not specify a size, it will default to 128MB, which should be optimal for most applications. Some system administrators might think that 128MB is excessive and be tempted to reduce the size of the journal to the minimum of 8MB or a more conservative 32MB. While that might work, it can severely impact performance. Like many journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it is put into place. This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is automatically replayed at mount time. However, it does not take much file system activity to fill an 8MB journal, and when the journal is full, performance slows because GFS2 has to wait for writes to the storage. It is generally recommended to use the default journal size of 128MB. If your file system is very small (for example, 5GB), having a 128MB journal might be impractical. If you have a larger file system and can afford the space, using 256MB journals might improve performance. Size and Number of Resource Groups When a GFS2 file system is created with the mkfs.gfs2 command, it divides the storage into uniform slices known as resource groups. It attempts to estimate an optimal resource group size (ranging from 32MB to 2GB). You can override the default with the -r option of the mkfs.gfs2 command. Your optimal resource group size depends on how you will use the file system. Consider how full it will be and whether or not it will be severely fragmented. You should experiment with different resource group sizes to see which results in optimal performance. It is a best practice to experiment with a test cluster before deploying GFS2 into full production. If your file system has too many resource groups, each of which is too small, block allocations can waste too much time searching tens of thousands of resource groups for a free block. The more full your file system, the more resource groups that will be searched, and every one of them requires a cluster-wide lock. This leads to slow performance. If, however, your file system has too few resource groups, each of which is too big, block allocations might contend more often for the same resource group lock, which also impacts performance. For example, if you have a 10GB file system that is carved up into five resource groups of 2GB, the nodes in your cluster will fight over those five resource groups more often than if the same file system were carved into 320 resource groups of 32MB. The problem is exacerbated if your file system is nearly full because every block allocation might have to look through several resource groups before it finds one with a free block. GFS2 tries to mitigate this problem in two ways: First, when a resource group is completely full, it remembers that and tries to avoid checking it for future allocations until a block is freed from it. If you never delete files, contention will be less severe. However, if your application is constantly deleting blocks and allocating new blocks on a file system that is mostly full, contention will be very high and this will severely impact performance. 
Second, when new blocks are added to an existing file (for example, by appending) GFS2 will attempt to group the new blocks together in the same resource group as the file. This is done to increase performance: on a spinning disk, seek operations take less time when they are physically close together. The worst case scenario is when there is a central directory in which all the nodes create files because all of the nodes will constantly fight to lock the same resource group. 1.5. Considerations for GFS2 in a cluster When determining the number of nodes that your system will contain, note that there is a trade-off between high availability and performance. With a larger number of nodes, it becomes increasingly difficult to make workloads scale. For that reason, Red Hat does not support using GFS2 for cluster file system deployments greater than 16 nodes. Deploying a cluster file system is not a "drop in" replacement for a single node deployment. Red Hat recommends that you allow a period of around 8-12 weeks of testing on new installations in order to test the system and ensure that it is working at the required performance level. During this period, any performance or functional issues can be worked out and any queries should be directed to the Red Hat support team. Red Hat recommends that customers considering deploying clusters have their configurations reviewed by Red Hat support before deployment to avoid any possible support issues later on. 1.6. Hardware considerations Take the following hardware considerations into account when deploying a GFS2 file system. Use higher quality storage options GFS2 can operate on cheaper shared storage options, such as iSCSI or Fibre Channel over Ethernet (FCoE), but you will get better performance if you buy higher quality storage with larger caching capacity. Red Hat performs most quality, sanity, and performance tests on SAN storage with Fibre Channel interconnect. As a general rule, it is always better to deploy something that has been tested first. Test network equipment before deploying Higher quality, faster network equipment makes cluster communications and GFS2 run faster with better reliability. However, you do not have to purchase the most expensive hardware. Some of the most expensive network switches have problems passing multicast packets, which are used for passing fcntl locks (flocks), whereas cheaper commodity network switches are sometimes faster and more reliable. Red Hat recommends trying equipment before deploying it into full production.
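To bring the preceding formatting considerations together, the following is a minimal sketch of creating a GFS2 file system on a shared logical volume; the cluster name, file system name, and device path are illustrative assumptions and must match your own cluster configuration and the file system names you chose during planning:
# mkfs.gfs2 -t mycluster:mydata1 -j 2 -b 4096 -r 256 /dev/vg_shared/lv_mydata1
In this sketch, -t sets the lock table name in ClusterName:FSName format, -j sets the number of journals (one for each node that mounts the file system), -b sets the block size in bytes, and -r overrides the default resource group size in megabytes.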
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/assembly_planning-gfs2-deployment-configuring-gfs2-file-systems
|
Preface
|
Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/pr01
|
23.17. Devices
|
23.17. Devices This set of XML elements are all used to describe devices provided to the guest virtual machine domain. All of the devices below are indicated as children of the main <devices> element. The following virtual devices are supported: virtio-scsi-pci - PCI bus storage device virtio-blk-pci - PCI bus storage device virtio-net-pci - PCI bus network device also known as virtio-net virtio-serial-pci - PCI bus input device virtio-balloon-pci - PCI bus memory balloon device virtio-rng-pci - PCI bus virtual random number generator device Important If a virtio device is created where the number of vectors is set to a value higher than 32, the device behaves as if it was set to a zero value on Red Hat Enterprise Linux 6, but not on Enterprise Linux 7. The resulting vector setting mismatch causes a migration error if the number of vectors on any virtio device on either platform is set to 33 or higher. It is, therefore, not recommended to set the vector value to be greater than 32. All virtio devices with the exception of virtio-balloon-pci and virtio-rng-pci will accept a vector argument. ... <devices> <emulator>/usr/libexec/qemu-kvm</emulator> </devices> ... Figure 23.26. Devices - child elements The contents of the <emulator> element specify the fully qualified path to the device model emulator binary. The capabilities XML specifies the recommended default emulator to use for each particular domain type or architecture combination. 23.17.1. Hard Drives, Floppy Disks, and CD-ROMs This section of the domain XML specifies any device that looks like a disk, including any floppy disk, hard disk, CD-ROM, or paravirtualized driver that is specified in the <disk> element. <disk type='network'> <driver name="qemu" type="raw" io="threads" ioeventfd="on" event_idx="off"/> <source protocol="sheepdog" name="image_name"> <host name="hostname" port="7000"/> </source> <target dev="hdb" bus="ide"/> <boot order='1'/> <transient/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> Figure 23.27. Devices - Hard drives, floppy disks, CD-ROMs Example <disk type='network'> <driver name="qemu" type="raw"/> <source protocol="rbd" name="image_name2"> <host name="hostname" port="7000"/> </source> <target dev="hdd" bus="ide"/> <auth username='myuser'> <secret type='ceph' usage='mypassid'/> </auth> </disk> Figure 23.28. Devices - Hard drives, floppy disks, CD-ROMs Example 2 <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="http" name="url_path"> <host name="hostname" port="80"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> Figure 23.29. Devices - Hard drives, floppy disks, CD-ROMs Example 3 <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="https" name="url_path"> <host name="hostname" port="443"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="ftp" name="url_path"> <host name="hostname" port="21"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> Figure 23.30. 
Devices - Hard drives, floppy disks, CD-ROMs Example 4 <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="ftps" name="url_path"> <host name="hostname" port="990"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="tftp" name="url_path"> <host name="hostname" port="69"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='block' device='lun'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <target dev='sda' bus='scsi'/> <address type='drive' controller='0' bus='0' target='3' unit='0'/> </disk> Figure 23.31. Devices - Hard drives, floppy disks, CD-ROMs Example 5 <disk type='block' device='disk'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <geometry cyls='16383' heads='16' secs='63' trans='lba'/> <blockio logical_block_size='512' physical_block_size='4096'/> <target dev='hda' bus='ide'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='blk-pool0' volume='blk-pool0-vol0'/> <target dev='hda' bus='ide'/> </disk> <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'> <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> Figure 23.32. Devices - Hard drives, floppy disks, CD-ROMs Example 6 <disk type='network' device='lun'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/1'> iqn.2013-07.com.example:iscsi-pool <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='sda' bus='scsi'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> Figure 23.33. Devices - Hard drives, floppy disks, CD-ROMs Example 7 <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/tmp/test.img' startupPolicy='optional'/> <target dev='sdb' bus='scsi'/> <readonly/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' discard='unmap'/> <source file='/var/lib/libvirt/images/discard1.img'/> <target dev='vdb' bus='virtio'/> <alias name='virtio-disk1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </disk> </devices> ... Figure 23.34. Devices - Hard drives, floppy disks, CD-ROMs Example 8 23.17.1.1. Disk element The <disk> element is the main container for describing disks. The attribute type can be used with the <disk> element. The following types are allowed: file block dir network For more information, see the libvirt upstream pages . 23.17.1.2. Source element Represents the disk source. The disk source depends on the disk type attribute, as follows: <file> - The file attribute specifies the fully-qualified path to the file in which the disk is located. 
<block> - The dev attribute specifies the fully-qualified path to the host device that serves as the disk. <dir> - The dir attribute specifies the fully-qualified path to the directory used as the disk. <network> - The protocol attribute specifies the protocol used to access the requested image. Possible values are: nbd , iscsi , rbd , sheepdog , and gluster . If the protocol attribute is rbd , sheepdog , or gluster , an additional attribute, name , is mandatory. This attribute specifies which volume and image will be used. If the protocol attribute is nbd , the name attribute is optional. If the protocol attribute is iscsi , the name attribute may include a logical unit number, separated from the target's name with a slash. For example: iqn.2013-07.com.example:iscsi-pool/1. If not specified, the default LUN is zero. <volume> - The underlying disk source is represented by the pool and volume attributes. <pool> - The name of the storage pool (managed by libvirt ) where the disk source resides. <volume> - The name of the storage volume (managed by libvirt ) used as the disk source. The value for the volume attribute is the output from the Name column of the virsh vol-list [pool-name] command. When the disk type is network , the source may have zero or more host sub-elements used to specify the host physical machines to connect to. For a file disk type which represents a CD-ROM or floppy (the device attribute), it is possible to define the policy for what to do with the disk if the source file is not accessible. This is done by setting the startupPolicy attribute with one of the following values: mandatory causes a failure if missing for any reason. This is the default setting. requisite causes a failure if missing on boot up, drops if missing on migrate, restore, or revert. optional drops if missing at any start attempt. 23.17.1.3. Mirror element This element is present if the hypervisor has started a BlockCopy operation, where the file specified in the <mirror> element's file attribute will eventually have the same contents as the source, and with the file format specified in the format attribute (which might differ from the format of the source). If an attribute ready is present, then it is known the disk is ready to pivot; otherwise, the disk is probably still copying. For now, this element is only valid in output; it is ignored on input. 23.17.1.4. Target element The <target> element controls the bus or device under which the disk is exposed to the guest virtual machine operating system. The dev attribute indicates the logical device name. The actual device name specified is not guaranteed to map to the device name in the guest virtual machine operating system. The optional bus attribute specifies the type of disk device to emulate; possible values are driver-specific, with typical values being ide , scsi , virtio , kvm , usb or sata . If omitted, the bus type is inferred from the style of the device name. For example, a device named 'sda' will typically be exported using a SCSI bus. The optional attribute tray indicates the tray status of the removable disks (for example, CD-ROM or Floppy disk), where the value can be either open or closed . The default setting is closed . 23.17.1.5. iotune element The optional <iotune> element provides additional per-device I/O tuning, with values that can vary for each device (contrast this to the blkiotune element, which applies globally to the domain). 
This element has the following optional sub-elements (note that any sub-element not specified or at all or specified with a value of 0 implies no limit): <total_bytes_sec> - The total throughput limit in bytes per second. This element cannot be used with <read_bytes_sec> or <write_bytes_sec> . <read_bytes_sec> - The read throughput limit in bytes per second. <write_bytes_sec> - The write throughput limit in bytes per second. <total_iops_sec> - The total I/O operations per second. This element cannot be used with <read_iops_sec> or <write_iops_sec> . <read_iops_sec> - The read I/O operations per second. <write_iops_sec> - The write I/O operations per second. 23.17.1.6. Driver element The optional <driver> element allows specifying further details related to the hypervisor driver that is used to provide the disk. The following options may be used: If the hypervisor supports multiple back-end drivers, the name attribute selects the primary back-end driver name, while the optional type attribute provides the sub-type. The optional cache attribute controls the cache mechanism. Possible values are: default , none , writethrough , writeback , directsync (similar to writethrough , but it bypasses the host physical machine page cache) and unsafe (host physical machine may cache all disk I/O, and sync requests from guest virtual machines are ignored). The optional error_policy attribute controls how the hypervisor behaves on a disk read or write error. Possible values are stop , report , ignore , and enospace . The default setting of error_policy is report . There is also an optional rerror_policy that controls behavior for read errors only. If no rerror_policy is given, error_policy is used for both read and write errors. If rerror_policy is given, it overrides the error_policy for read errors. Also note that enospace is not a valid policy for read errors, so if error_policy is set to enospace and no rerror_policy is given, the read error default setting, report will be used. The optional io attribute controls specific policies on I/O; kvm guest virtual machines support threads and native . The optional ioeventfd attribute allows users to set domain I/O asynchronous handling for virtio disk devices. The default is determined by the hypervisor. Accepted values are on and off . Enabling this allows the guest virtual machine to be executed while a separate thread handles I/O. Typically, guest virtual machines experiencing high system CPU utilization during I/O will benefit from this. On the other hand, an overloaded host physical machine can increase guest virtual machine I/O latency. However, it is recommended that you do not change the default setting, and allow the hypervisor to determine the setting. Note The ioeventfd attribute is included in the <driver> element of the disk XML section and also the <driver> element of the device XML section. In the former case, it influences the virtIO disk, and in the latter case the SCSI disk. The optional event_idx attribute controls some aspects of device event processing and can be set to either on or off . If set to on , it will reduce the number of interrupts and exits for the guest virtual machine. The default is determined by the hypervisor and the default setting is on . When this behavior is not required, setting off forces the feature off. However, it is highly recommended that you not change the default setting, and allow the hypervisor to dictate the setting. 
The optional copy_on_read attribute controls whether to copy the read backing file into the image file. The accepted values can be either on or off . copy-on-read avoids accessing the same backing file sectors repeatedly, and is useful when the backing file is over a slow network. By default copy-on-read is off . The discard='unmap' attribute can be set to enable discard support. The same attribute can instead be set to discard='ignore' to disable it. discard='ignore' is the default setting. 23.17.1.7. Additional Device Elements The following attributes may be used within the device element: <boot> - Specifies that the disk is bootable. It accepts the following additional boot values: <order> - Determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in the BIOS boot loader section. <encryption> - Specifies how the volume is encrypted. <readonly> - Indicates the device cannot be modified by the guest virtual machine. This setting is the default for disks with attribute <device='cdrom'> . <shareable> - Indicates the device is expected to be shared between domains (as long as hypervisor and operating system support this). If shareable is used, cache='no' should be used for that device. <transient> - Indicates that changes to the device contents should be reverted automatically when the guest virtual machine exits. With some hypervisors, marking a disk transient prevents the domain from participating in migration or snapshots. <serial> - Specifies the serial number of the guest virtual machine's hard drive. For example, <serial> WD-WMAP9A966149 </serial> . <wwn> - Specifies the World Wide Name (WWN) of a virtual hard disk or CD-ROM drive. It must be composed of 16 hexadecimal digits. <vendor> - Specifies the vendor of a virtual hard disk or CD-ROM device. It must not be longer than 8 printable characters. <product> - Specifies the product of a virtual hard disk or CD-ROM device. It must not be longer than 16 printable characters. <host> - Supports the following attributes: name - specifies the host name port - specifies the port number transport - specifies the transport type socket - specifies the path to the socket The meaning of this element and the number of the elements depend on the protocol attribute, as shown in the following list of additional host attributes based on the protocol: nbd - Specifies a server running nbd-server and may only be used for one host physical machine. The default port for this protocol is 10809 . rbd - Monitors servers of RBD type and may be used for one or more host physical machines. sheepdog - Specifies one of the sheepdog servers (default is localhost:7000) and can be used with one or none of the host physical machines. gluster - Specifies a server running a glusterd daemon and may be used for only one host physical machine. The valid values for the transport attribute are tcp , rdma or unix . If nothing is specified, tcp is assumed. If transport is unix , the socket attribute specifies the path to the unix socket. <address> - Ties the disk to a given slot of a controller. The actual <controller> device can often be inferred but it can also be explicitly specified. The type attribute is mandatory, and is typically pci or drive . For a pci controller, additional attributes for bus , slot , and function must be present, as well as optional domain and multifunction . multifunction defaults to off . 
For a drive controller, additional attributes controller , bus , target , and unit are available, each with a default setting of 0 . auth - Provides the authentication credentials needed to access the source. It includes a mandatory attribute username , which identifies the user name to use during authentication, as well as a sub-element secret with mandatory attribute type . geometry - Provides the ability to override geometry settings. This is mostly useful for S390 DASD-disks or older DOS-disks. It can have the following parameters: cyls - Specifies the number of cylinders. heads - Specifies the number of heads. secs - Specifies the number of sectors per track. trans - Specifies the BIOS-Translation-Modes and can have the following values: none , lba or auto . blockio - Allows the block device to be overridden with any of the following blockio options: logical_block_size - Reports to the guest virtual machine operating system and describes the smallest units for disk I/O. physical_block_size - Reports to the guest virtual machine operating system and describes the disk's hardware sector size, which can be relevant for the alignment of disk data. 23.17.2. Device Addresses Many devices have an optional <address> sub-element to describe where the device is placed on the virtual bus presented to the guest virtual machine. If an address (or any optional attribute within an address) is omitted on input, libvirt will generate an appropriate address; but an explicit address is needed if more control over the layout is required. See below for device examples including an address element. Every address has a mandatory attribute type that describes which bus the device is on. The choice of which address to use for a given device is constrained in part by the device and the architecture of the guest virtual machine. For example, a disk device uses type='drive' , while a console device would use type='pci' on the 32-bit AMD and Intel, or AMD64 and Intel 64, guest virtual machines, or type='spapr-vio' on PowerPC64 pseries guest virtual machines. Each address <type> has additional optional attributes that control where on the bus the device will be placed. The additional attributes are as follows: type='pci' - PCI addresses have the following additional attributes: domain (a 2-byte hex integer, not currently used by KVM) bus (a hex value between 0 and 0xff, inclusive) slot (a hex value between 0x0 and 0x1f, inclusive) function (a value between 0 and 7, inclusive) Also available is the multifunction attribute, which controls turning on the multifunction bit for a particular slot or function in the PCI control register. This multifunction attribute defaults to 'off' , but should be set to 'on' for function 0 of a slot that will have multiple functions used. 
type='drive' - drive addresses have the following additional attributes: controller - (a 2-digit controller number) bus - (a 2-digit bus number) target - (a 2-digit target number) unit - (a 2-digit unit number on the bus) type='virtio-serial' - Each virtio-serial address has the following additional attributes: controller - (a 2-digit controller number) bus - (a 2-digit bus number) slot - (a 2-digit slot within the bus) type='ccid' - A CCID address, used for smart-cards, has the following additional attributes: bus - (a 2-digit bus number) slot - (a 2-digit slot within the bus) type='usb' - USB addresses have the following additional attributes: bus - (a hex value between 0 and 0xfff, inclusive) port - (a dotted notation of up to four octets, such as 1.2 or 2.1.3.1) type='spapr-vio' - On PowerPC pseries guest virtual machines, devices can be assigned to the SPAPR-VIO bus. It has a flat 64-bit address space; by convention, devices are generally assigned at a non-zero multiple of 0x1000, but other addresses are valid and permitted by libvirt . The additional reg attribute, which determines the hex value address of the starting register, can be specified for this address type. 23.17.3. Controllers Depending on the guest virtual machine architecture, it is possible to assign many virtual devices to a single bus. Under normal circumstances libvirt can automatically infer which controller to use for the bus. However, it may be necessary to provide an explicit <controller> element in the guest virtual machine XML: ... <devices> <controller type='ide' index='0'/> <controller type='virtio-serial' index='0' ports='16' vectors='4'/> <controller type='virtio-serial' index='1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/> ... </devices> ... Figure 23.35. Controller Elements Each controller has a mandatory attribute type , which must be one of "ide", "fdc", "scsi", "sata", "usb", "ccid", or "virtio-serial" , and a mandatory attribute index which is the decimal integer describing in which order the bus controller is encountered (for use in controller attributes of address elements). The "virtio-serial" controller has two additional optional attributes, ports and vectors , which control how many devices can be connected through the controller. A <controller type='scsi'> has an optional attribute model , which is one of "auto", "buslogic", "ibmvscsi", "lsilogic", "lsisas1068", "virtio-scsi", or "vmpvscsi" . The <controller type='scsi'> also has an attribute num_queues which enables multi-queue support for the number of queues specified. In addition, an ioeventfd attribute can be used, which specifies whether the controller should use asynchronous handling on the SCSI disk. Accepted values are "on" and "off". A "usb" controller has an optional attribute model , which is one of "piix3-uhci", "piix4-uhci", "ehci", "ich9-ehci1", "ich9-uhci1", "ich9-uhci2", "ich9-uhci3", "vt82c686b-uhci", "pci-ohci" or "nec-xhci" . Additionally, if the USB bus needs to be explicitly disabled for the guest virtual machine, model='none' may be used. The PowerPC64 "spapr-vio" addresses do not have an associated controller. For controllers that are themselves devices on a PCI or USB bus, an optional sub-element address can specify the exact relationship of the controller to its master bus, with semantics given above. 
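For example, the following minimal sketch pins a virtio-scsi controller to a specific PCI slot with an explicit address sub-element. The slot value and the num_queues setting shown here are arbitrary choices for illustration only; pick values that do not conflict with other devices in your domain XML:
...
<devices>
  <controller type='scsi' index='0' model='virtio-scsi' num_queues='4'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
  </controller>
</devices>
...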
USB companion controllers have an optional sub-element master to specify the exact relationship of the companion to its master controller. A companion controller is on the same bus as its master, so the companion index value should be equal. ... <devices> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0' bus='0' slot='4' function='7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/> </controller> ... </devices> ... Figure 23.36. Devices - controllers - USB 23.17.4. Device Leases When using a lock manager, you have the option to record device leases against a guest virtual machine. The lock manager will ensure that the guest virtual machine does not start unless the leases can be acquired. When configured using conventional management tools, the following section of the domain XML is affected: ... <devices> ... <lease> <lockspace>somearea</lockspace> <key>somekey</key> <target path='/some/lease/path' offset='1024'/> </lease> ... </devices> ... Figure 23.37. Devices - device leases The lease section can have the following arguments: lockspace - An arbitrary string that identifies lockspace within which the key is held. Lock managers may impose extra restrictions on the format, or length of the lockspace name. key - An arbitrary string that uniquely identifies the lease to be acquired. Lock managers may impose extra restrictions on the format, or length of the key. target - The fully qualified path of the file associated with the lockspace. The offset specifies where the lease is stored within the file. If the lock manager does not require a offset, set this value to 0 . 23.17.5. Host Physical Machine Device Assignment 23.17.5.1. USB / PCI devices The host physical machine's USB and PCI devices can be passed through to the guest virtual machine using the hostdev element, by modifying the host physical machine using a management tool, configure the following section of the domain XML file: ... <devices> <hostdev mode='subsystem' type='usb'> <source startupPolicy='optional'> <vendor id='0x1234'/> <product id='0xbeef'/> </source> <boot order='2'/> </hostdev> </devices> ... Figure 23.38. Devices - Host physical machine device assignment Alternatively, the following can also be done: ... <devices> <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address bus='0x06' slot='0x02' function='0x0'/> </source> <boot order='1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </hostdev> </devices> ... Figure 23.39. Devices - Host physical machine device assignment alternative Alternatively, the following can also be done: ... <devices> <hostdev mode='subsystem' type='scsi'> <source> <adapter name='scsi_host0'/> <address type='scsi' bus='0' target='0' unit='0'/> </source> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </hostdev> </devices> .. Figure 23.40. Devices - host physical machine scsi device assignment The components of this section of the domain XML are as follows: Table 23.16. Host physical machine device assignment elements Parameter Description hostdev This is the main element for describing host physical machine devices. It accepts the following options: mode - the value is always subsystem for USB and PCI devices. type - usb for USB devices and pci for PCI devices. 
managed - Toggles the Managed mode of the device: When set to yes for a PCI device, it attaches to the guest machine and detaches from the guest machine and re-attaches to the host machine as necessary. managed='yes' is recommended for general use of device assignment. When set to no or omitted for PCI and for USB devices, the device stays attached to the guest. To make the device available to the host, the user must use the argument virNodeDeviceDettach or the virsh nodedev-dettach command before starting the guest or hot plugging the device. In addition, they must use virNodeDeviceReAttach or virsh nodedev-reattach after hot-unplugging the device or stopping the guest. managed='no' is mainly recommended for devices that are intended to be dedicated to a specific guest. source Describes the device as seen from the host physical machine. The USB device can be addressed by vendor or product ID using the vendor and product elements or by the device's address on the host physical machines using the address element. PCI devices on the other hand can only be described by their address. Note that the source element of USB devices may contain a startupPolicy attribute which can be used to define a rule for what to do if the specified host physical machine USB device is not found. The attribute accepts the following values: mandatory - Fails if missing for any reason (the default). requisite - Fails if missing on boot up, drops if missing on migrate/restore/revert. optional - Drops if missing at any start attempt. vendor, product These elements each have an id attribute that specifies the USB vendor and product ID. The IDs can be given in decimal, hexadecimal (starting with 0x) or octal (starting with 0) form. boot Specifies that the device is bootable. The attribute's order determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS boot loader section. rom Used to change how a PCI device's ROM is presented to the guest virtual machine. The optional bar attribute can be set to on or off , and determines whether or not the device's ROM will be visible in the guest virtual machine's memory map. (In PCI documentation, the rom bar setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the default setting will be used. The optional file attribute is used to point to a binary file to be presented to the guest virtual machine as the device's ROM BIOS. This can be useful for example to provide a PXE boot ROM for a virtual function of an SR-IOV capable ethernet device (which has no boot ROMs for the VFs). address Also has a bus and device attribute to specify the USB bus and device number the device appears at on the host physical machine. The values of these attributes can be given in decimal, hexadecimal (starting with 0x) or octal (starting with 0) form. For PCI devices, the element carries 3 attributes allowing to designate the device as can be found with lspci or with virsh nodedev-list . 23.17.5.2. Block / character devices The host physical machine's block / character devices can be passed through to the guest virtual machine by using management tools to modify the domain XML hostdev element. Note that this is only possible with container-based virtualization. ... <hostdev mode='capabilities' type='storage'> <source> <block>/dev/sdf1</block> </source> </hostdev> ... Figure 23.41. 
Devices - Host physical machine device assignment block character devices An alternative approach is this: ... <hostdev mode='capabilities' type='misc'> <source> <char>/dev/input/event3</char> </source> </hostdev> ... Figure 23.42. Devices - Host physical machine device assignment block character devices alternative 1 Another alternative approach is this: ... <hostdev mode='capabilities' type='net'> <source> <interface>eth0</interface> </source> </hostdev> ... Figure 23.43. Devices - Host physical machine device assignment block character devices alternative 2 The components of this section of the domain XML are as follows: Table 23.17. Block / character device elements Parameter Description hostdev This is the main container for describing host physical machine devices. For block/character devices, passthrough mode is always capabilities , and type is block for a block device and char for a character device. source This describes the device as seen from the host physical machine. For block devices, the path to the block device in the host physical machine operating system is provided in the nested block element, while for character devices, the char element is used. 23.17.6. Redirected devices USB device redirection through a character device is configured by modifying the following section of the domain XML: ... <devices> <redirdev bus='usb' type='tcp'> <source mode='connect' host='localhost' service='4000'/> <boot order='1'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xbeef' version='2.00' allow='yes'/> <usbdev allow='no'/> </redirfilter> </devices> ... Figure 23.44. Devices - redirected devices The components of this section of the domain XML are as follows: Table 23.18. Redirected device elements Parameter Description redirdev This is the main container for describing redirected devices. bus must be usb for a USB device. An additional attribute type is required, matching one of the supported serial device types, to describe the host physical machine side of the tunnel: type='tcp' or type='spicevmc' (which uses the usbredir channel of a SPICE graphics device) are typical. The redirdev element has an optional sub-element, address , which can tie the device to a particular controller. Further sub-elements, such as source , may be required according to the given type , although a target sub-element is not required (since the consumer of the character device is the hypervisor itself, rather than a device visible in the guest virtual machine). boot Specifies that the device is bootable. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS boot loader section. redirfilter This is used for creating the filter rule to filter out certain devices from redirection. It uses sub-element usbdev to define each filter rule. The class attribute is the USB Class code. 23.17.7. Smartcard Devices A virtual smartcard device can be supplied to the guest virtual machine via the smartcard element. A USB smartcard reader device on the host physical machine cannot be used on a guest virtual machine with device passthrough. This is because it cannot be made available to both the host physical machine and guest virtual machine, and can lock the host physical machine computer when it is removed from the guest virtual machine. 
Therefore, some hypervisors provide a specialized virtual device that can present a smartcard interface to the guest virtual machine, with several modes for describing how the credentials are obtained from the host physical machine or even a from a channel created to a third-party smartcard provider. Configure USB device redirection through a character device with management tools to modify the following section of the domain XML: ... <devices> <smartcard mode='host'/> <smartcard mode='host-certificates'> <certificate>cert1</certificate> <certificate>cert2</certificate> <certificate>cert3</certificate> <database>/etc/pki/nssdb/</database> </smartcard> <smartcard mode='passthrough' type='tcp'> <source mode='bind' host='127.0.0.1' service='2001'/> <protocol type='raw'/> <address type='ccid' controller='0' slot='0'/> </smartcard> <smartcard mode='passthrough' type='spicevmc'/> </devices> ... Figure 23.45. Devices - smartcard devices The smartcard element has a mandatory attribute mode . In each mode, the guest virtual machine sees a device on its USB bus that behaves like a physical USB CCID (Chip/Smart Card Interface Device) card. The mode attributes are as follows: Table 23.19. Smartcard mode elements Parameter Description mode='host' In this mode, the hypervisor relays all requests from the guest virtual machine into direct access to the host physical machine's smartcard via NSS. No other attributes or sub-elements are required. See below about the use of an optional address sub-element. mode='host-certificates' This mode allows you to provide three NSS certificate names residing in a database on the host physical machine, rather than requiring a smartcard to be plugged into the host physical machine. These certificates can be generated using the command certutil -d /etc/pki/nssdb -x -t CT,CT,CT -S -s CN=cert1 -n cert1, and the resulting three certificate names must be supplied as the content of each of three certificate sub-elements. An additional sub-element database can specify the absolute path to an alternate directory (matching the -d flag of the certutil command when creating the certificates); if not present, it defaults to /etc/pki/nssdb . mode='passthrough' Using this mode allows you to tunnel all requests through a secondary character device to a third-party provider (which may in turn be communicating to a smartcard or using three certificate files, rather than having the hypervisor directly communicate with the host physical machine. In this mode of operation, an additional attribute type is required, matching one of the supported serial device types, to describe the host physical machine side of the tunnel; type='tcp' or type='spicevmc' (which uses the smartcard channel of a SPICE graphics device) are typical. Further sub-elements, such as source , may be required according to the given type, although a target sub-element is not required (since the consumer of the character device is the hypervisor itself, rather than a device visible in the guest virtual machine). Each mode supports an optional sub-element address , which fine-tunes the correlation between the smartcard and a ccid bus controller. For more information, see Section 23.17.2, "Device Addresses" ). 23.17.8. Network Interfaces Modify the network interface devices using management tools to configure the following part of the domain XML: ... 
<devices> <interface type='direct' trustGuestRxFilters='yes'> <source dev='eth0'/> <mac address='52:54:00:5d:c7:9e'/> <boot order='1'/> <rom bar='off'/> </interface> </devices> ... Figure 23.46. Devices - network interfaces There are several possibilities for configuring the network interface for the guest virtual machine. This is done by setting a value to the interface element's type attribute. The following values may be used: "direct" - Attaches the guest virtual machine's NIC to the physical NIC on the host physical machine. For details and an example, see Section 23.17.8.6, "Direct attachment to physical interfaces" . "network" - This is the recommended configuration for general guest virtual machine connectivity on host physical machines with dynamic or wireless networking configurations. For details and an example, see Section 23.17.8.1, "Virtual networks" . "bridge" - This is the recommended configuration setting for guest virtual machine connectivity on host physical machines with static wired networking configurations. For details and an example, see Section 23.17.8.2, "Bridge to LAN" . "ethernet" - Provides a means for the administrator to execute an arbitrary script to connect the guest virtual machine's network to the LAN. For details and an example, see Section 23.17.8.5, "Generic Ethernet connection" . "hostdev" - Allows a PCI network device to be directly assigned to the guest virtual machine using generic device passthrough. For details and an example, see Section 23.17.8.7, "PCI passthrough" . "mcast" - A multicast group can be used to represent a virtual network. For details and an example, see Section 23.17.8.8, "Multicast tunnel" . "user" - Using the user option sets the user space SLIRP stack parameters provides a virtual LAN with NAT to the outside world. For details and an example, see Section 23.17.8.4, "User space SLIRP stack" . "server" - Using the server option creates a TCP client-server architecture in order to provide a virtual network where one guest virtual machine provides the server end of the network and all other guest virtual machines are configured as clients. For details and an example, see Section 23.17.8.9, "TCP tunnel" . Each of these options has a link to give more details. Additionally, each <interface> element can be defined with an optional <trustGuestRxFilters> attribute which allows host physical machine to detect and trust reports received from the guest virtual machine. These reports are sent each time the interface receives changes to the filter. This includes changes to the primary MAC address, the device address filter, or the vlan configuration. The <trustGuestRxFilters> attribute is disabled by default for security reasons. It should also be noted that support for this attribute depends on the guest network device model as well as on the host physical machine's connection type. Currently, it is only supported for the virtio device models and for macvtap connections on the host physical machine. A simple use case where it is recommended to set the optional parameter <trustGuestRxFilters> is if you want to give your guest virtual machines the permission to control host physical machine side filters, as any filters that are set by the guest will also be mirrored on the host. In addition to the attributes listed above, each <interface> element can take an optional <address> sub-element that can tie the interface to a particular PCI slot, with attribute type='pci' . For more information, see Section 23.17.2, "Device Addresses" . 
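As a sketch of the optional <address> sub-element mentioned above, an interface definition can be tied to a specific PCI slot as shown below. The slot value is arbitrary and is used here only for illustration; libvirt generates a suitable address automatically if the element is omitted:
...
<devices>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
</devices>
...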
23.17.8.1. Virtual networks This is the recommended configuration for general guest virtual machine connectivity on host physical machines with dynamic or wireless networking configurations (or multi-host physical machine environments where the host physical machine hardware details, which are described separately in a <network> definition). In addition, it provides a connection with details that are described by the named network definition. Depending on the virtual network's forward mode configuration, the network may be totally isolated (no <forward> element given), using NAT to connect to an explicit network device or to the default route ( forward mode='nat' ), routed with no NAT ( forward mode='route' ), or connected directly to one of the host physical machine's network interfaces (using macvtap) or bridge devices ( forward mode='bridge|private|vepa|passthrough' ) For networks with a forward mode of bridge , private , vepa , and passthrough , it is assumed that the host physical machine has any necessary DNS and DHCP services already set up outside the scope of libvirt. In the case of isolated, nat, and routed networks, DHCP and DNS are provided on the virtual network by libvirt, and the IP range can be determined by examining the virtual network configuration with virsh net-dumpxml [networkname] . The 'default' virtual network, which is set up out of the box, uses NAT to connect to the default route and has an IP range of 192.168.122.0/255.255.255.0. Each guest virtual machine will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element (refer to Section 23.17.8.11, "Overriding the target element" ). When the source of an interface is a network, a port group can be specified along with the name of the network; one network may have multiple portgroups defined, with each portgroup containing slightly different configuration information for different classes of network connections. Also, similar to <direct> network connections (described below), a connection of type network may specify a <virtualport> element, with configuration data to be forwarded to a 802.1Qbg or 802.1Qbh-compliant Virtual Ethernet Port Aggregator (VEPA)switch, or to an Open vSwitch virtual switch. Since the type of switch is dependent on the configuration setting in the <network> element on the host physical machine, it is acceptable to omit the <virtualport type> attribute. You will need to specify the <virtualport type> either once or many times. When the domain starts up a complete <virtualport> element is constructed by merging together the type and attributes defined. This results in a newly-constructed virtual port. Note that the attributes from lower virtual ports cannot make changes on the attributes defined in higher virtual ports. Interfaces take the highest priority, while port group is lowest priority. For example, to create a properly working network with both an 802.1Qbh switch and an Open vSwitch switch, you may choose to specify no type, but both profileid and an interfaceid must be supplied. The other attributes to be filled in from the virtual port, such as such as managerid , typeid , or profileid , are optional. If you want to limit a guest virtual machine to connecting only to certain types of switches, you can specify the virtualport type, and only switches with the specified port type will connect. You can also further limit switch connectivity by specifying additional parameters. 
As a result, if the port was specified and the host physical machine's network has a different type of virtualport, the connection of the interface will fail. The virtual network parameters are defined using management tools that modify the following part of the domain XML: ... <devices> <interface type='network'> <source network='default'/> </interface> ... <interface type='network'> <source network='default' portgroup='engineering'/> <target dev='vnet7'/> <mac address="00:11:22:33:44:55"/> <virtualport> <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices> ... Figure 23.47. Devices - network interfaces- virtual networks 23.17.8.2. Bridge to LAN As mentioned in, Section 23.17.8, "Network Interfaces" , this is the recommended configuration setting for guest virtual machine connectivity on host physical machines with static wired networking configurations. Bridge to LAN provides a bridge from the guest virtual machine directly onto the LAN. This assumes there is a bridge device on the host physical machine which has one or more of the host physical machines physical NICs enslaved. The guest virtual machine will have an associated tun device created with a name of <vnetN> , which can also be overridden with the <target> element (refer to Section 23.17.8.11, "Overriding the target element" ). The <tun> device will be enslaved to the bridge. The IP range or network configuration is the same as what is used on the LAN. This provides the guest virtual machine full incoming and outgoing network access, just like a physical machine. On Linux systems, the bridge device is normally a standard Linux host physical machine bridge. On host physical machines that support Open vSwitch, it is also possible to connect to an Open vSwitch bridge device by adding virtualport type='openvswitch'/ to the interface definition. The Open vSwitch type virtualport accepts two parameters in its parameters element: an interfaceid which is a standard UUID used to uniquely identify this particular interface to Open vSwitch (if you do no specify one, a random interfaceid will be generated when first defining the interface), and an optional profileid which is sent to Open vSwitch as the interfaces <port-profile> . To set the bridge to LAN settings, use a management tool that will configure the following part of the domain XML: ... <devices> ... <interface type='bridge'> <source bridge='br0'/> </interface> <interface type='bridge'> <source bridge='br1'/> <target dev='vnet7'/> <mac address="00:11:22:33:44:55"/> </interface> <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch'> <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> ... </devices> Figure 23.48. Devices - network interfaces- bridge to LAN 23.17.8.3. Setting a port masquerading range In cases where you want to set the port masquerading range, set the port as follows: <forward mode='nat'> <address start='1.2.3.4' end='1.2.3.10'/> </forward> ... Figure 23.49. Port Masquerading Range These values should be set using the iptables commands as shown in Section 17.3, "Network Address Translation" 23.17.8.4. User space SLIRP stack Setting the user space SLIRP stack parameters provides a virtual LAN with NAT to the outside world. The virtual network has DHCP and DNS services and will give the guest virtual machine an IP addresses starting from 10.0.2.15. The default router is 10.0.2.2 and the DNS server is 10.0.2.3. 
This networking is the only option for unprivileged users who need their guest virtual machines to have outgoing access. The user space SLIRP stack parameters are defined in the following part of the domain XML: ... <devices> <interface type='user'/> ... <interface type='user'> <mac address="00:11:22:33:44:55"/> </interface> </devices> ... Figure 23.50. Devices - network interfaces- User space SLIRP stack 23.17.8.5. Generic Ethernet connection This provides a means for the administrator to execute an arbitrary script to connect the guest virtual machine's network to the LAN. The guest virtual machine will have a <tun> device created with a name of vnetN , which can also be overridden with the <target> element. After creating the tun device a shell script will be run and complete the required host physical machine network integration. By default, this script is called /etc/qemu-ifup but can be overridden (refer to Section 23.17.8.11, "Overriding the target element" ). The generic ethernet connection parameters are defined in the following part of the domain XML: ... <devices> <interface type='ethernet'/> ... <interface type='ethernet'> <target dev='vnet7'/> <script path='/etc/qemu-ifup-mynet'/> </interface> </devices> ... Figure 23.51. Devices - network interfaces- generic ethernet connection 23.17.8.6. Direct attachment to physical interfaces This directly attaches the guest virtual machine's NIC to the physical interface of the host physical machine, if the physical interface is specified. This requires the Linux macvtap driver to be available. One of the following mode attribute values vepa ( 'Virtual Ethernet Port Aggregator'), bridge or private can be chosen for the operation mode of the macvtap device. vepa is the default mode. Manipulating direct attachment to physical interfaces involves setting the following parameters in this section of the domain XML: ... <devices> ... <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices> ... Figure 23.52. Devices - network interfaces- direct attachment to physical interfaces The individual modes cause the delivery of packets to behave as shown in Table 23.20, "Direct attachment to physical interface elements" : Table 23.20. Direct attachment to physical interface elements Element Description vepa All of the guest virtual machines' packets are sent to the external bridge. Packets whose destination is a guest virtual machine on the same host physical machine as where the packet originates from are sent back to the host physical machine by the VEPA capable bridge (today's bridges are typically not VEPA capable). bridge Packets whose destination is on the same host physical machine as where they originate from are directly delivered to the target macvtap device. Both origin and destination devices need to be in bridge mode for direct delivery. If either one of them is in vepa mode, a VEPA capable bridge is required. private All packets are sent to the external bridge and will only be delivered to a target virtual machine on the same host physical machine if they are sent through an external router or gateway and that device sends them back to the host physical machine. This procedure is followed if either the source or destination device is in private mode. passthrough This feature attaches a virtual function of a SR-IOV capable NIC directly to a guest virtual machine without losing the migration capability. All packets are sent to the VF/IF of the configured network device. 
Depending on the capabilities of the device, additional prerequisites or limitations may apply; for example, this requires kernel 2.6.38 or later. The network access of directly attached virtual machines can be managed by the hardware switch to which the physical interface of the host physical machine is connected to. The interface can have additional parameters as shown below, if the switch conforms to the IEEE 802.1Qbg standard. The parameters of the virtualport element are documented in more detail in the IEEE 802.1Qbg standard. The values are network specific and should be provided by the network administrator. In 802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual interface of a virtual machine. Note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID. Additional elements that can be manipulated are described in Table 23.21, "Direct attachment to physical interface additional elements" : Table 23.21. Direct attachment to physical interface additional elements Element Description managerid The VSI Manager ID identifies the database containing the VSI type and instance definitions. This is an integer value and the value 0 is reserved. typeid The VSI Type ID identifies a VSI type characterizing the network access. VSI types are typically managed by network administrator. This is an integer value. typeidversion The VSI Type Version allows multiple versions of a VSI Type. This is an integer value. instanceid The VSI Instance ID Identifier is generated when a VSI instance (a virtual interface of a virtual machine) is created. This is a globally unique identifier. profileid The profile ID contains the name of the port profile that is to be applied onto this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. Additional parameters in the domain XML include: ... <devices> ... <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type="802.1Qbg"> <parameters managerid="11" typeid="1193047" typeidversion="2" instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/> </virtualport> </interface> </devices> ... Figure 23.53. Devices - network interfaces- direct attachment to physical interfaces additional parameters The interface can have additional parameters as shown below if the switch conforms to the IEEE 802.1Qbh standard. The values are network specific and should be provided by the network administrator. Additional parameters in the domain XML include: ... <devices> ... <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> ... Figure 23.54. Devices - network interfaces - direct attachment to physical interfaces more additional parameters The profileid attribute contains the name of the port profile to be applied to this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. 23.17.8.7. 
PCI passthrough A PCI network device (specified by the source element) is directly assigned to the guest virtual machine using generic device passthrough, after first optionally setting the device's MAC address to the configured value, and associating the device with an 802.1Qbh capable switch using an optionally specified virtualport element (see the examples of virtualport given above for type='direct' network devices). Note that due to limitations in standard single-port PCI ethernet card driver design, only SR-IOV (Single Root I/O Virtualization) virtual function (VF) devices can be assigned in this manner. To assign a standard single-port PCI or PCIe ethernet card to a guest virtual machine, use the traditional hostdev device definition. Note that this "intelligent passthrough" of network devices is very similar to the functionality of a standard hostdev device, the difference being that this method allows specifying a MAC address and virtualport for the passed-through device. If these capabilities are not required, if you have a standard single-port PCI, PCIe, or USB network card that does not support SR-IOV (and hence would anyway lose the configured MAC address during reset after being assigned to the guest virtual machine domain), or if you are using libvirt version older than 0.9.11, use standard hostdev definition to assign the device to the guest virtual machine instead of interface type='hostdev' . ... <devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> ... Figure 23.55. Devices - network interfaces- PCI passthrough 23.17.8.8. Multicast tunnel A multicast group can be used to represent a virtual network. Any guest virtual machine with network devices within the same multicast group will communicate with each other, even if they reside across multiple physical host physical machines. This mode may be used as an unprivileged user. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first 4 network types in order to provide appropriate routing. The multicast protocol is compatible with protocols used by user mode Linux guest virtual machines as well. Note that the source address used must be from the multicast address block. A multicast tunnel is created by manipulating the interface type using a management tool and setting it to mcast , and providing a mac address and source address , for example: ... <devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices> ... Figure 23.56. Devices - network interfaces- multicast tunnel 23.17.8.9. TCP tunnel Creating a TCP client-server architecture is another way to provide a virtual network where one guest virtual machine provides the server end of the network and all other guest virtual machines are configured as clients. All network traffic between the guest virtual machines is routed through the guest virtual machine that is configured as the server. This model is also available for use to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. 
To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first 4 network types thereby providing the appropriate routing. A TCP tunnel is created by manipulating the interface type using a management tool and setting it to server on the guest virtual machine that provides the server end of the network and to client on the others, and providing a mac address and source address , for example: ... <devices> <interface type='server'> <mac address='52:54:00:22:c9:42'/> <source address='192.168.0.1' port='5558'/> </interface> ... <interface type='client'> <mac address='52:54:00:8b:c9:51'/> <source address='192.168.0.1' port='5558'/> </interface> </devices> ... Figure 23.57. Devices - network interfaces- TCP tunnel 23.17.8.10. Setting NIC driver-specific options Some NICs may have tunable driver-specific options. These options are set as attributes of the driver sub-element of the interface definition. These options are set by using management tools to configure the following sections of the domain XML: <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'/> </interface> </devices> ... Figure 23.58. Devices - network interfaces- setting NIC driver-specific options The following attributes are available for the "virtio" NIC driver: Table 23.22. virtio NIC driver elements Parameter Description name The optional name attribute forces which type of back-end driver to use. The value can be either kvm (a user-space back-end) or vhost (a kernel back-end, which requires the vhost module to be provided by the kernel); an attempt to require the vhost driver without kernel support will be rejected. The default setting is vhost if the vhost driver is present, but will silently fall back to kvm if not. txmode Specifies how to handle transmission of packets when the transmit buffer is full. The value can be either iothread or timer . If set to iothread , packet tx is all done in an iothread in the bottom half of the driver (this option translates into adding "tx=bh" to the kvm command-line "-device" virtio-net-pci option). If set to timer , tx work is done in KVM, and if there is more tx data than can be sent at the present time, a timer is set before KVM moves on to do other things; when the timer fires, another attempt is made to send more data. It is not recommended to change this value. ioeventfd Sets domain I/O asynchronous handling for the interface device. The default is left to the discretion of the hypervisor. Accepted values are on and off . Enabling this option allows KVM to execute a guest virtual machine while a separate thread handles I/O. Typically, guest virtual machines experiencing high system CPU utilization during I/O will benefit from this. On the other hand, overloading the physical host machine may also increase guest virtual machine I/O latency. It is not recommended to change this value. event_idx The event_idx attribute controls some aspects of device event processing. The value can be either on or off . on is the default, which reduces the number of interrupts and exits for the guest virtual machine. In situations where this behavior is sub-optimal, this attribute provides a way to force the feature off. It is not recommended to change this value. 23.17.8.11. Overriding the target element To override the target element, use a management tool to make the following changes to the domain XML: ... 
<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> </interface> </devices> ... Figure 23.59. Devices - network interfaces- overriding the target element If no target is specified, certain hypervisors will automatically generate a name for the created tun device. This name can be manually specified, however the name must not start with either vnet or vif , which are prefixes reserved by libvirt and certain hypervisors. Manually-specified targets using these prefixes will be ignored. 23.17.8.12. Specifying boot order To specify the boot order, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <boot order='1'/> </interface> </devices> ... Figure 23.60. Specifying boot order In hypervisors which support it, you can set a specific NIC to be used for the network boot. The order of attributes determine the order in which devices will be tried during boot sequence. Note that the per-device boot elements cannot be used together with general boot elements in BIOS boot loader section. 23.17.8.13. Interface ROM BIOS configuration To specify the ROM BIOS configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </interface> </devices> ... Figure 23.61. Interface ROM BIOS configuration For hypervisors that support it, you can change how a PCI Network device's ROM is presented to the guest virtual machine. The bar attribute can be set to on or off , and determines whether or not the device's ROM will be visible in the guest virtual machine's memory map. (In PCI documentation, the rom bar setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the KVM default will be used (older versions of KVM used off for the default, while newer KVM hypervisors default to on ). The optional file attribute is used to point to a binary file to be presented to the guest virtual machine as the device's ROM BIOS. This can be useful to provide an alternative boot ROM for a network device. 23.17.8.14. Quality of service (QoS) Incoming and outgoing traffic can be shaped independently to set Quality of Service (QoS). The bandwidth element can have at most one inbound and one outbound child elements. Leaving any of these child elements out results in no QoS being applied on that traffic direction. Therefore, to shape only a domain's incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute average (or floor as described below). Average specifies the average bit rate on the interface being shaped. In addition, there are two optional attributes: peak - This attribute specifies the maximum rate at which the bridge can send data, in kilobytes a second. A limitation of this implementation is this attribute in the outbound element is ignored, as Linux ingress filters do not know it yet. burst - Specifies the amount of bytes that can be burst at peak speed. Accepted values for attributes are integer numbers. The units for average and peak attributes are kilobytes per second, whereas burst is only set in kilobytes. In addition, inbound traffic can optionally have a floor attribute. This guarantees minimal throughput for shaped interfaces. 
Using the floor requires that all traffic goes through one point where QoS decisions can take place. As such, it may only be used in cases where the interface is of type='network' with a forward type of route , nat , or no forward at all. Note that within a virtual network, all connected interfaces are required to have at least the inbound QoS set ( average at least) but the floor attribute does not require specifying average . However, peak and burst attributes still require average . At the present time, ingress qdiscs may not have any classes, and therefore, floor may only be applied on inbound and not outbound traffic. To specify the QoS configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <bandwidth> <inbound average='1000' peak='5000' floor='200' burst='1024'/> <outbound average='128' peak='256' burst='256'/> </bandwidth> </interface> </devices> ... Figure 23.62. Quality of service 23.17.8.15. Setting VLAN tag (on supported network types only) To specify the VLAN tag configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='bridge'> <vlan> <tag id='42'/> </vlan> <source bridge='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices> ... Figure 23.63. Setting VLAN tag (on supported network types only) If the network connection used by the guest virtual machine supports VLAN tagging transparent to the guest virtual machine, an optional vlan element can specify one or more VLAN tags to apply to the guest virtual machine's network traffic. Only OpenvSwitch and type='hostdev' SR-IOV interfaces support transparent VLAN tagging of guest virtual machine traffic; other interfaces, including standard Linux bridges and libvirt's own virtual networks, do not support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches provide their own methods (outside of libvirt) to tag guest virtual machine traffic onto specific VLANs. To allow for specification of multiple tags (in the case of VLAN trunking), the tag subelement specifies which VLAN tag to use (for example, tag id='42'/ ). If an interface has more than one vlan element defined, it is assumed that the user wants to do VLAN trunking using all the specified tags. If VLAN trunking with a single tag is needed, the optional attribute trunk='yes' can be added to the top-level vlan element. 23.17.8.16. Modifying virtual link state This element sets the virtual network link state. Possible values for attribute state are up and down . If down is specified as the value, the interface behaves as if the network cable is disconnected. Default behavior if this element is unspecified is up . To specify the virtual link state configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <link state='down'/> </interface> </devices> ... Figure 23.64. Modifying virtual link state 23.17.9. Input Devices Input devices allow interaction with the graphical framebuffer in the guest virtual machine. When enabling the framebuffer, an input device is automatically provided. It may be possible to add additional devices explicitly, for example to provide a graphics tablet for absolute cursor movement.
To specify the input device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <input type='mouse' bus='usb'/> </devices> ... Figure 23.65. Input devices The <input> element has one mandatory attribute: type , which can be set to mouse or tablet . tablet provides absolute cursor movement, while mouse uses relative movement. The optional bus attribute can be used to refine the exact device type and can be set to kvm (paravirtualized), ps2 , and usb . The input element has an optional sub-element <address> , which can tie the device to a particular PCI slot, as documented above. 23.17.10. Hub Devices A hub is a device that expands a single port into several so that there are more ports available to connect devices to a host physical machine system. To specify the hub device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <hub type='usb'/> </devices> ... Figure 23.66. Hub devices The hub element has one mandatory attribute, type , which can only be set to usb . The hub element has an optional sub-element, address , with type='usb' , which can tie the device to a particular controller. 23.17.11. Graphical Framebuffers A graphics device allows for graphical interaction with the guest virtual machine operating system. A guest virtual machine will typically have either a framebuffer or a text console configured to allow interaction with the user. To specify the graphical framebuffer device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <graphics type='sdl' display=':0.0'/> <graphics type='vnc' port='5904'> <listen type='address' address='1.2.3.4'/> </graphics> <graphics type='rdp' autoport='yes' multiUser='yes' /> <graphics type='desktop' fullscreen='yes'/> <graphics type='spice'> <listen type='network' network='rednet'/> </graphics> </devices> ... Figure 23.67. Graphical framebuffers The graphics element has a mandatory type attribute, which takes the value sdl , vnc , rdp , desktop or spice , as explained in the tables below: Table 23.23. Graphical framebuffer main elements Parameter Description sdl This displays a window on the host physical machine desktop. It accepts the following optional arguments: A display attribute for the display to use An xauth attribute for the authentication identifier An optional fullscreen attribute accepting values yes or no vnc Starts a VNC server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the preferred syntax for indicating auto-allocation of the TCP port to use. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a VNC password in clear text. The keymap attribute specifies the keymap to use. It is possible to set a limit on the validity of the password be giving an timestamp passwdValidTo='2010-04-09T15:51:00' assumed to be in UTC. The connected attribute allows control of connected client during password changes. VNC accepts the keep value only; note that it may not be supported by all hypervisors. Rather than using listen/port, KVM supports a socket attribute for listening on a UNIX domain socket path. spice Starts a SPICE server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated), while tlsPort gives an alternative secure port number. 
The autoport attribute is the new preferred syntax for indicating auto-allocation of both port numbers. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a SPICE password in clear text. The keymap attribute specifies the keymap to use. It is possible to set a limit on the validity of the password be giving an timestamp passwdValidTo='2010-04-09T15:51:00' assumed to be in UTC. The connected attribute allows control of a connected client during password changes. SPICE accepts keep to keep a client connected, disconnect to disconnect the client and fail to fail changing password. Note that this is not supported by all hypervisors. The defaultMode attribute sets the default channel security policy; valid values are secure , insecure and the default any (which is secure if possible, but falls back to insecure rather than erroring out if no secure path is available). When SPICE has both a normal and a TLS-secured TCP port configured, it may be desirable to restrict what channels can be run on each port. To do this, add one or more channel elements inside the main graphics element. Valid channel names include main , display , inputs , cursor , playback , record , smartcard , and usbredir . To specify the SPICE configuration settings, use a mangement tool to make the following changes to the domain XML: <graphics type='spice' port='-1' tlsPort='-1' autoport='yes'> <channel name='main' mode='secure'/> <channel name='record' mode='insecure'/> <image compression='auto_glz'/> <streaming mode='filter'/> <clipboard copypaste='no'/> <mouse mode='client'/> </graphics> Figure 23.68. Sample SPICE configuration SPICE supports variable compression settings for audio, images and streaming. These settings are configured using the compression attribute in the following elements: image to set image compression (accepts auto_glz , auto_lz , quic , glz , lz , off ) jpeg for JPEG compression for images over WAN (accepts auto , never , always ) zlib for configuring WAN image compression (accepts auto , never , always ) and playback for enabling audio stream compression (accepts on or off ) The streaming element sets streaming mode. The mode attribute can be set to filter , all or off . In addition, copy and paste functionality (through the SPICE agent) is set by the clipboard element. It is enabled by default, and can be disabled by setting the copypaste property to no . The mouse element sets mouse mode. The mode attribute can be set to server or client . If no mode is specified, the KVM default will be used ( client mode). Additional elements include: Table 23.24. Additional graphical framebuffer elements Parameter Description rdp Starts an RDP server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the preferred syntax for indicating auto-allocation of the TCP port to use. The replaceUser attribute is a boolean deciding whether multiple simultaneous connections to the virtual machine are permitted. The multiUser attribute decides whether the existing connection must be dropped and a new connection must be established by the VRDP server, when a new client connects in single connection mode. desktop This value is currently reserved for VirtualBox domains. It displays a window on the host physical machine desktop, similarly to sdl , but uses the VirtualBox viewer. Just like sdl , it accepts the optional attributes display and fullscreen . 
listen Rather than inputting the address information used to set up the listening socket for graphics types vnc and spice , the listen attribute, a separate sub-element of graphics , can be specified (see the examples above). listen accepts the following attributes: type - Set to either address or network . This tells whether this listen element is specifying the address to be used directly, or by naming a network (which will then be used to determine an appropriate address for listening). address - This attribute will contain either an IP address or host name (which will be resolved to an IP address via a DNS query) to listen on. In the "live" XML of a running domain, this attribute will be set to the IP address used for listening, even if type='network' . network - If type='network' , the network attribute will contain the name of a network in libvirt's list of configured networks. The named network configuration will be examined to determine an appropriate listen address. For example, if the network has an IPv4 address in its configuration (for example, if it has a forward type of route, NAT, or an isolated type), the first IPv4 address listed in the network's configuration will be used. If the network is describing a host physical machine bridge, the first IPv4 address associated with that bridge device will be used. If the network is describing one of the 'direct' (macvtap) modes, the first IPv4 address of the first forward dev will be used. 23.17.12. Video Devices To specify the video device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <video> <model type='vga' vram='8192' heads='1'> <acceleration accel3d='yes' accel2d='yes'/> </model> </video> </devices> ... Figure 23.69. Video devices The video element and its sub-elements are explained in the table below: Table 23.25. Video device elements Parameter Description video The video element is the container for describing video devices. For backwards compatibility, if no video is set but there is a graphics element in the domain XML, then libvirt will add a default video according to the guest virtual machine type. If "ram" or "vram" are not supplied, a default value is used. model This has a mandatory type attribute which takes the value vga , cirrus , vmvga , kvm , vbox , or qxl depending on the hypervisor features available. You can also provide the amount of video memory in kibibytes (blocks of 1024 bytes) using vram and the number of screens using heads . acceleration If acceleration is supported it should be enabled using the accel3d and accel2d attributes in the acceleration element. address The optional address sub-element can be used to tie the video device to a particular PCI slot. 23.17.13. Consoles, Serial, and Channel Devices A character device provides a way to interact with the virtual machine. Paravirtualized consoles, serial ports, and channels are all classed as character devices and are represented using the same syntax. To specify the consoles, channel and other device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> </devices> ... Figure 23.70.
Consoles, serial, and channel devices In each of these directives, the top-level element name ( serial , console , channel ) describes how the device is presented to the guest virtual machine. The guest virtual machine interface is configured by the target element. The interface presented to the host physical machine is given in the type attribute of the top-level element. The host physical machine interface is configured by the source element. The source element may contain an optional seclabel to override the way that labeling is done on the socket path. If this element is not present, the security label is inherited from the per-domain setting. Each character device element has an optional sub-element address which can tie the device to a particular controller or PCI slot. Note Parallel ports, as well as the isa-parallel device, are no longer supported. 23.17.14. Guest Virtual Machine Interfaces A character device presents itself to the guest virtual machine as one of the following types. To set the serial port, use a management tool to make the following change to the domain XML: ... <devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> </devices> ... Figure 23.71. Guest virtual machine interface serial port <target> can have a port attribute, which specifies the port number. Ports are numbered starting from 0. There are usually 0, 1 or 2 serial ports. There is also an optional type attribute, which has two choices for its value, isa-serial or usb-serial . If type is missing, isa-serial will be used by default. For usb-serial , an optional sub-element <address> with type='usb' can tie the device to a particular controller, documented above. The <console> element is used to represent interactive consoles. Depending on the type of guest virtual machine in use, the consoles might be paravirtualized devices, or they might be a clone of a serial device, according to the following rules: If no targetType attribute is set, then the default device type is according to the hypervisor's rules. The default type will be added when re-querying the XML fed into libvirt. For fully virtualized guest virtual machines, the default device type will usually be a serial port. If the targetType attribute is serial , and if no <serial> element exists, the console element will be copied to the <serial> element. If a <serial> element does already exist, the console element will be ignored. If the targetType attribute is not serial , it will be treated normally. Only the first <console> element may use a targetType of serial . Secondary consoles must all be paravirtualized. On s390, the console element may use a targetType of sclp or sclplm (line mode). SCLP is the native console type for s390. There is no controller associated to SCLP consoles. In the example below, a virtio console device is exposed in the guest virtual machine as /dev/hvc[0-7] (for more information, see the Fedora project's virtio-serial page ): ... <devices> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <!-- KVM virtio console --> <console type='pty'> <source path='/dev/pts/5'/> <target type='virtio' port='0'/> </console> </devices> ... ... <devices> <!-- KVM s390 sclp console --> <console type='pty'> <source path='/dev/pts/1'/> <target type='sclp' port='0'/> </console> </devices> ... Figure 23.72. Guest virtual machine interface - virtio console device If the console is presented as a serial port, the <target> element has the same attributes as for a serial port. 
There is usually only one console. 23.17.15. Channel This represents a private communication channel between the host physical machine and the guest virtual machine. It is manipulated by making changes to a guest virtual machine using a management tool to edit following section of the domain XML: ... <devices> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> <!-- KVM virtio channel --> <channel type='pty'> <target type='virtio' name='arbitrary.virtio.serial.port.name'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/kvm/f16x86_64.agent'/> <target type='virtio' name='org.kvm.guest_agent.0'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel> </devices> ... Figure 23.73. Channel This can be implemented in a variety of ways. The specific type of <channel> is given in the type attribute of the <target> element. Different channel types have different target attributes as follows: guestfwd - Dictates that TCP traffic sent by the guest virtual machine to a given IP address and port is forwarded to the channel device on the host physical machine. The target element must have address and port attributes. virtio - a paravirtualized virtio channel. In a Linux guest operating system, the <channel> configuration changes the content of /dev/vport* files. If the optional element name is specified, the configuration instead uses a /dev/virtio-ports/USDname file. For more information, see the Fedora project's virtio-serial page . The optional element address can tie the channel to a particular type='virtio-serial' controller, documented above. With KVM, if name is "org.kvm.guest_agent.0", then libvirt can interact with a guest agent installed in the guest virtual machine, for actions such as guest virtual machine shutdown or file system quiescing. spicevmc - Paravirtualized SPICE channel. The domain must also have a SPICE server as a graphics device, at which point the host physical machine piggy-backs messages across the main channel. The target element must be present, with attribute type='virtio'; an optional attribute name controls how the guest virtual machine will have access to the channel, and defaults to name='com.redhat.spice.0' . The optional <address> element can tie the channel to a particular type='virtio-serial' controller. 23.17.16. Host Physical Machine Interface A character device presents itself to the host physical machine as one of the following types: Table 23.26. Character device elements Parameter Description XML snippet Domain logfile Disables all input on the character device, and sends output into the virtual machine's logfile. Device logfile A file is opened and all data sent to the character device is written to the file. Note that the destination directory must have the virt_log_t SELinux label for a guest with this setting to start successfully. Virtual console Connects the character device to the graphical framebuffer in a virtual console. This is typically accessed using a special hotkey sequence such as "ctrl+alt+3". Null device Connects the character device to the void. No data is ever provided to the input. All data written is discarded. Pseudo TTY A Pseudo TTY is allocated using /dev/ptmx . A suitable client such as virsh console can connect to interact with the serial port locally. 
NB Special case NB special case if <console type='pty'> , then the TTY path is also duplicated as an attribute tty='/dev/pts/3' on the top level <console> tag. This provides compat with existing syntax for <console> tags. Host physical machine device proxy The character device is passed through to the underlying physical character device. The device types must match, for example the emulated serial port should only be connected to a host physical machine serial port - do not connect a serial port to a parallel port. Named pipe The character device writes output to a named pipe. See the pipe(7) man page for more info. TCP client-server The character device acts as a TCP client connecting to a remote server. Or as a TCP server waiting for a client connection. Alternatively you can use telnet instead of raw TCP. In addition, you can also use telnets (secure telnet) and tls. UDP network console The character device acts as a UDP netconsole service, sending and receiving packets. This is a lossy service. UNIX domain socket client-server The character device acts as a UNIX domain socket server, accepting connections from local clients. 23.17.17. Sound Devices A virtual sound card can be attached to the host physical machine using the sound element. ... <devices> <sound model='ac97'/> </devices> ... Figure 23.74. Virtual sound card The sound element has one mandatory attribute, model , which specifies what real sound device is emulated. Valid values are specific to the underlying hypervisor, though typical choices are 'sb16' , 'ac97' , and 'ich6' . In addition, a sound element with 'ich6' model set can have optional codec sub-elements to attach various audio codecs to the audio device. If not specified, a default codec will be attached to allow playback and recording. Valid values are 'duplex' (advertises a line-in and a line-out) and 'micro' (advertises a speaker and a microphone). ... <devices> <sound model='ich6'> <codec type='micro'/> <sound/> </devices> ... Figure 23.75. Sound Devices Each sound element has an optional sub-element <address> which can tie the device to a particular PCI slot, documented above. Note The es1370 sound device is no longer supported in Red Hat Enterprise Linux 7. Use ac97 instead. 23.17.18. Watchdog Device A virtual hardware watchdog device can be added to the guest virtual machine using the <watchdog> element. The watchdog device requires an additional driver and management daemon in the guest virtual machine. Currently there is no support notification when the watchdog fires. ... <devices> <watchdog model='i6300esb'/> </devices> ... ... <devices> <watchdog model='i6300esb' action='poweroff'/> </devices> ... Figure 23.76. Watchdog Device The following attributes are declared in this XML: model - The required model attribute specifies what real watchdog device is emulated. Valid values are specific to the underlying hypervisor. The model attribute may take the following values: i6300esb - the recommended device, emulating a PCI Intel 6300ESB ib700 - emulates an ISA iBase IB700 action - The optional action attribute describes what action to take when the watchdog expires. Valid values are specific to the underlying hypervisor. 
The action attribute can have the following values: reset - default setting, forcefully resets the guest virtual machine shutdown - gracefully shuts down the guest virtual machine (not recommended) poweroff - forcefully powers off the guest virtual machine pause - pauses the guest virtual machine none - does nothing dump - automatically dumps the guest virtual machine. Note that the 'shutdown' action requires that the guest virtual machine is responsive to ACPI signals. In the sort of situations where the watchdog has expired, guest virtual machines are usually unable to respond to ACPI signals. Therefore, using 'shutdown' is not recommended. In addition, the directory to save dump files can be configured by auto_dump_path in file /etc/libvirt/kvm.conf. 23.17.19. Setting a Panic Device Red Hat Enterprise Linux 7 hypervisor is capable of detecting Linux guest virtual machine kernel panics, using the pvpanic mechanism. When invoked, pvpanic sends a message to the libvirtd daemon, which initiates a preconfigured reaction. To enable the pvpanic device, do the following: Add or uncomment the following line in the /etc/libvirt/qemu.conf file on the host machine. Run the virsh edit command to edit domain XML file of the specified guest, and add the panic into the devices parent element. <devices> <panic> <address type='isa' iobase='0x505'/> </panic> </devices> The <address> element specifies the address of panic. The default ioport is 0x505. In most cases, specifying an address is not needed. The way in which libvirtd reacts to the crash is determined by the <on_crash> element of the domain XML. The possible actions are as follows: coredump-destroy - Captures the guest virtual machine's core dump and shuts the guest down. coredump-restart - Captures the guest virtual machine's core dump and restarts the guest. preserve - Halts the guest virtual machine to await further action. Note If the kdump service is enabled, it takes precedence over the <on_crash> setting, and the selected <on_crash> action is not performed. For more information on pvpanic , see the related Knowledgebase article . 23.17.20. Memory Balloon Device The balloon device can designate a part of a virtual machine's RAM as not being used (a process known as inflating the balloon), so that the memory can be freed for the host, or for other virtual machines on that host, to use. When the virtual machine needs the memory again, the balloon can be deflated and the host can distribute the RAM back to the virtual machine. The size of the memory balloon is determined by the difference between the <currentMemory> and <memory> settings. For example, if <memory> is set to 2 GiB and <currentMemory> to 1 GiB, the balloon contains 1 GiB. If manual configuration is necessary, the <currentMemory> value can be set by using the virsh setmem command and the <memory> value can be set by using the virsh setmaxmem command. Warning If modifying the <currentMemory> value, make sure to leave sufficient memory for the guest OS to work properly. If the set value is too low, the guest may become unstable. A virtual memory balloon device is automatically added to all KVM guest virtual machines. In the XML configuration, this is represented by the <memballoon> element. Memory ballooning is managed by the libvirt service, and will be automatically added when appropriate. Therefore, it is not necessary to explicitly add this element in the guest virtual machine XML unless a specific PCI slot needs to be assigned. 
Note that if the <memballoon> device needs to be explicitly disabled, model='none' can be used for this purpose. The following example shows a memballoon device automatically added by libvirt : ... <devices> <memballoon model='virtio'/> </devices> ... Figure 23.77. Memory balloon device The following example shows a device that has been added manually with static PCI slot 2 requested: ... <devices> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memballoon> </devices> ... Figure 23.78. Memory balloon device added manually The required model attribute specifies what type of balloon device is provided. Valid values are specific to the virtualization platform; in the KVM hypervisor, 'virtio' is the default setting.
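The ballooning behavior described above can be exercised from the host with the virsh setmem and virsh setmaxmem commands mentioned in this section. The following is a minimal sketch rather than an official procedure; the guest name guest1 and the sizes are hypothetical, and the values are given in kibibytes, the default unit for these commands:
# Assume a running guest named guest1 (hypothetical) whose domain XML defines <memory>2097152</memory>, that is, a 2 GiB maximum.
# Inflate the balloon so that the guest currently sees only 1 GiB; the other 1 GiB is freed for the host.
virsh setmem guest1 1048576 --live
# Deflate the balloon again and return the full 2 GiB to the guest.
virsh setmem guest1 2097152 --live
# Raise the <memory> maximum to 4 GiB in the persistent configuration; this takes effect on the next boot of the guest.
virsh setmaxmem guest1 4194304 --config
# Query the balloon statistics reported by the guest virtual machine's balloon driver.
virsh dommemstat guest1
In this sketch, the difference between the maximum set with virsh setmaxmem and the current allocation set with virsh setmem is the size of the balloon at any given time.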
|
[
"<devices> <emulator>/usr/libexec/qemu-kvm</emulator> </devices>",
"<disk type='network'> <driver name=\"qemu\" type=\"raw\" io=\"threads\" ioeventfd=\"on\" event_idx=\"off\"/> <source protocol=\"sheepdog\" name=\"image_name\"> <host name=\"hostname\" port=\"7000\"/> </source> <target dev=\"hdb\" bus=\"ide\"/> <boot order='1'/> <transient/> <address type='drive' controller='0' bus='1' unit='0'/> </disk>",
"<disk type='network'> <driver name=\"qemu\" type=\"raw\"/> <source protocol=\"rbd\" name=\"image_name2\"> <host name=\"hostname\" port=\"7000\"/> </source> <target dev=\"hdd\" bus=\"ide\"/> <auth username='myuser'> <secret type='ceph' usage='mypassid'/> </auth> </disk>",
"<disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"http\" name=\"url_path\"> <host name=\"hostname\" port=\"80\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk>",
"<disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"https\" name=\"url_path\"> <host name=\"hostname\" port=\"443\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"ftp\" name=\"url_path\"> <host name=\"hostname\" port=\"21\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk>",
"<disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"ftps\" name=\"url_path\"> <host name=\"hostname\" port=\"990\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"tftp\" name=\"url_path\"> <host name=\"hostname\" port=\"69\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='block' device='lun'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <target dev='sda' bus='scsi'/> <address type='drive' controller='0' bus='0' target='3' unit='0'/> </disk>",
"<disk type='block' device='disk'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <geometry cyls='16383' heads='16' secs='63' trans='lba'/> <blockio logical_block_size='512' physical_block_size='4096'/> <target dev='hda' bus='ide'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='blk-pool0' volume='blk-pool0-vol0'/> <target dev='hda' bus='ide'/> </disk> <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'> <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk>",
"<disk type='network' device='lun'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/1'> iqn.2013-07.com.example:iscsi-pool <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='sda' bus='scsi'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk>",
"<disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/tmp/test.img' startupPolicy='optional'/> <target dev='sdb' bus='scsi'/> <readonly/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' discard='unmap'/> <source file='/var/lib/libvirt/images/discard1.img'/> <target dev='vdb' bus='virtio'/> <alias name='virtio-disk1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </disk> </devices>",
"<devices> <controller type='ide' index='0'/> <controller type='virtio-serial' index='0' ports='16' vectors='4'/> <controller type='virtio-serial' index='1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/> </controller> </devices>",
"<devices> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0' bus='0' slot='4' function='7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/> </controller> </devices>",
"<devices> <lease> <lockspace>somearea</lockspace> <key>somekey</key> <target path='/some/lease/path' offset='1024'/> </lease> </devices>",
"<devices> <hostdev mode='subsystem' type='usb'> <source startupPolicy='optional'> <vendor id='0x1234'/> <product id='0xbeef'/> </source> <boot order='2'/> </hostdev> </devices>",
"<devices> <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address bus='0x06' slot='0x02' function='0x0'/> </source> <boot order='1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </hostdev> </devices>",
"<devices> <hostdev mode='subsystem' type='scsi'> <source> <adapter name='scsi_host0'/> <address type='scsi' bus='0' target='0' unit='0'/> </source> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </hostdev> </devices> ..",
"<hostdev mode='capabilities' type='storage'> <source> <block>/dev/sdf1</block> </source> </hostdev>",
"<hostdev mode='capabilities' type='misc'> <source> <char>/dev/input/event3</char> </source> </hostdev>",
"<hostdev mode='capabilities' type='net'> <source> <interface>eth0</interface> </source> </hostdev>",
"<devices> <redirdev bus='usb' type='tcp'> <source mode='connect' host='localhost' service='4000'/> <boot order='1'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xbeef' version='2.00' allow='yes'/> <usbdev allow='no'/> </redirfilter> </devices>",
"<devices> <smartcard mode='host'/> <smartcard mode='host-certificates'> <certificate>cert1</certificate> <certificate>cert2</certificate> <certificate>cert3</certificate> <database>/etc/pki/nssdb/</database> </smartcard> <smartcard mode='passthrough' type='tcp'> <source mode='bind' host='127.0.0.1' service='2001'/> <protocol type='raw'/> <address type='ccid' controller='0' slot='0'/> </smartcard> <smartcard mode='passthrough' type='spicevmc'/> </devices>",
"<devices> <interface type='direct' trustGuestRxFilters='yes'> <source dev='eth0'/> <mac address='52:54:00:5d:c7:9e'/> <boot order='1'/> <rom bar='off'/> </interface> </devices>",
"<devices> <interface type='network'> <source network='default'/> </interface> <interface type='network'> <source network='default' portgroup='engineering'/> <target dev='vnet7'/> <mac address=\"00:11:22:33:44:55\"/> <virtualport> <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices>",
"<devices> <interface type='bridge'> <source bridge='br0'/> </interface> <interface type='bridge'> <source bridge='br1'/> <target dev='vnet7'/> <mac address=\"00:11:22:33:44:55\"/> </interface> <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch'> <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices>",
"<forward mode='nat'> <address start='1.2.3.4' end='1.2.3.10'/> </forward>",
"<devices> <interface type='user'/> <interface type='user'> <mac address=\"00:11:22:33:44:55\"/> </interface> </devices>",
"<devices> <interface type='ethernet'/> <interface type='ethernet'> <target dev='vnet7'/> <script path='/etc/qemu-ifup-mynet'/> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type=\"802.1Qbg\"> <parameters managerid=\"11\" typeid=\"1193047\" typeidversion=\"2\" instanceid=\"09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f\"/> </virtualport> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>",
"<devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>",
"<devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices>",
"<devices> <interface type='server'> <mac address='52:54:00:22:c9:42'> <source address='192.168.0.1' port='5558'/> </interface> <interface type='client'> <mac address='52:54:00:8b:c9:51'> <source address='192.168.0.1' port='5558'/> </interface> </devices>",
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'/> </interface> </devices>",
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> </interface> </devices>",
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <boot order='1'/> </interface> </devices>",
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </interface> </devices>",
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <bandwidth> <inbound average='1000' peak='5000' floor='200' burst='1024'/> <outbound average='128' peak='256' burst='256'/> </bandwidth> </interface> <devices>",
"<devices> <interface type='bridge'> <vlan> <tag id='42'/> </vlan> <source bridge='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> <devices>",
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <link state='down'/> </interface> <devices>",
"<devices> <input type='mouse' bus='usb'/> </devices>",
"<devices> <hub type='usb'/> </devices>",
"<devices> <graphics type='sdl' display=':0.0'/> <graphics type='vnc' port='5904'> <listen type='address' address='1.2.3.4'/> </graphics> <graphics type='rdp' autoport='yes' multiUser='yes' /> <graphics type='desktop' fullscreen='yes'/> <graphics type='spice'> <listen type='network' network='rednet'/> </graphics> </devices>",
"<graphics type='spice' port='-1' tlsPort='-1' autoport='yes'> <channel name='main' mode='secure'/> <channel name='record' mode='insecure'/> <image compression='auto_glz'/> <streaming mode='filter'/> <clipboard copypaste='no'/> <mouse mode='client'/> </graphics>",
"<devices> <video> <model type='vga' vram='8192' heads='1'> <acceleration accel3d='yes' accel2d='yes'/> </model> </video> </devices>",
"<devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> </devices>",
"<devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> </devices>",
"<devices> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <!-- KVM virtio console --> <console type='pty'> <source path='/dev/pts/5'/> <target type='virtio' port='0'/> </console> </devices> <devices> <!-- KVM s390 sclp console --> <console type='pty'> <source path='/dev/pts/1'/> <target type='sclp' port='0'/> </console> </devices>",
"<devices> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> <!-- KVM virtio channel --> <channel type='pty'> <target type='virtio' name='arbitrary.virtio.serial.port.name'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/kvm/f16x86_64.agent'/> <target type='virtio' name='org.kvm.guest_agent.0'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel> </devices>",
"<devices> <console type='stdio'> <target port='1'/> </console> </devices>",
"<devices> <serial type=\"file\"> <source path=\"/var/log/vm/vm-serial.log\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type='vc'> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type='null'> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"pty\"> <source path=\"/dev/pts/3\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"dev\"> <source path=\"/dev/ttyS0\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"pipe\"> <source path=\"/tmp/mypipe\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"tcp\"> <source mode=\"connect\" host=\"0.0.0.0\" service=\"2445\"/> <protocol type=\"raw\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"tcp\"> <source mode=\"bind\" host=\"127.0.0.1\" service=\"2445\"/> <protocol type=\"raw\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"tcp\"> <source mode=\"connect\" host=\"0.0.0.0\" service=\"2445\"/> <protocol type=\"telnet\"/> <target port=\"1\"/> </serial> <serial type=\"tcp\"> <source mode=\"bind\" host=\"127.0.0.1\" service=\"2445\"/> <protocol type=\"telnet\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"udp\"> <source mode=\"bind\" host=\"0.0.0.0\" service=\"2445\"/> <source mode=\"connect\" host=\"0.0.0.0\" service=\"2445\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <serial type=\"unix\"> <source mode=\"bind\" path=\"/tmp/foo\"/> <target port=\"1\"/> </serial> </devices>",
"<devices> <sound model='ac97'/> </devices>",
"<devices> <sound model='ich6'> <codec type='micro'/> <sound/> </devices>",
"<devices> <watchdog model='i6300esb'/> </devices> <devices> <watchdog model='i6300esb' action='poweroff'/> </devices>",
"auto_dump_path = \"/var/lib/libvirt/qemu/dump\"",
"<devices> <panic> <address type='isa' iobase='0x505'/> </panic> </devices>",
"<devices> <memballoon model='virtio'/> </devices>",
"<devices> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memballoon> </devices>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Devices
|
Chapter 1. Using Tekton Chains for OpenShift Pipelines supply chain security
|
Chapter 1. Using Tekton Chains for OpenShift Pipelines supply chain security Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines. By default, Tekton Chains observes all task run executions in your OpenShift Container Platform cluster. When the task runs complete, Tekton Chains takes a snapshot of the task runs. It then converts the snapshot to one or more standard payload formats, and finally signs and stores all artifacts. To capture information about task runs, Tekton Chains uses Result objects. When the objects are unavailable, Tekton Chains the URLs and qualified digests of the OCI images. 1.1. Key features You can sign task runs, task run results, and OCI registry images with cryptographic keys that are generated by tools such as cosign and skopeo . You can use attestation formats such as in-toto . You can securely store signatures and signed artifacts using OCI repository as a storage backend. 1.2. Configuring Tekton Chains The Red Hat OpenShift Pipelines Operator installs Tekton Chains by default. You can configure Tekton Chains by modifying the TektonConfig custom resource; the Operator automatically applies the changes that you make in this custom resource. To edit the custom resource, use the following command: USD oc edit TektonConfig config The custom resource includes a chain: array. You can add any supported configuration parameters to this array, as shown in the following example: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: {} chain: artifacts.taskrun.format: tekton config: {} 1.2.1. Supported parameters for Tekton Chains configuration Cluster administrators can use various supported parameter keys and values to configure specifications about task runs, OCI images, and storage. 1.2.1.1. Supported parameters for task run artifacts Table 1.1. Chains configuration: Supported parameters for task run artifacts Key Description Supported values Default value artifacts.taskrun.format The format for storing task run payloads. in-toto , slsa/v1 in-toto artifacts.taskrun.storage The storage backend for task run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci" . To disable storing task run artifacts, provide an empty string "" . tekton , oci , gcs , docdb , grafeas oci artifacts.taskrun.signer The signature backend for signing task run payloads. x509 , kms x509 Note slsa/v1 is an alias of in-toto for backwards compatibility. 1.2.1.2. Supported parameters for pipeline run artifacts Table 1.2. Chains configuration: Supported parameters for pipeline run artifacts Parameter Description Supported values Default value artifacts.pipelinerun.format The format for storing pipeline run payloads. in-toto , slsa/v1 in-toto artifacts.pipelinerun.storage The storage backend for storing pipeline run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci" . To disable storing pipeline run artifacts, provide an empty string "" . tekton , oci , gcs , docdb , grafeas oci artifacts.pipelinerun.signer The signature backend for signing pipeline run payloads. x509 , kms x509 artifacts.pipelinerun.enable-deep-inspection When this parameter is true , Tekton Chains records the results of the child task runs of a pipeline run. 
When this parameter is false , Tekton Chains records the results of the pipeline run, but not of its child task runs. "true", "false" "false" Note slsa/v1 is an alias of in-toto for backwards compatibility. For the grafeas storage backend, only Container Analysis is supported. You can not configure the grafeas server address in the current version of Tekton Chains. 1.2.1.3. Supported parameters for OCI artifacts Table 1.3. Chains configuration: Supported parameters for OCI artifacts Parameter Description Supported values Default value artifacts.oci.format The format for storing OCI payloads. simplesigning simplesigning artifacts.oci.storage The storage backend for storing OCI signatures. You can specify multiple backends as a comma-separated list, such as "oci,tekton" . To disable storing OCI artifacts, provide an empty string "" . tekton , oci , gcs , docdb , grafeas oci artifacts.oci.signer The signature backend for signing OCI payloads. x509 , kms x509 1.2.1.4. Supported parameters for KMS signers Table 1.4. Chains configuration: Supported parameters for KMS signers Parameter Description Supported values Default value signers.kms.kmsref The URI reference to a KMS service to use in kms signers. Supported schemes: gcpkms:// , awskms:// , azurekms:// , hashivault:// . See Providers in the Sigstore documentation for more details. 1.2.1.5. Supported parameters for storage Table 1.5. Chains configuration: Supported parameters for storage Parameter Description Supported values Default value storage.gcs.bucket The GCS bucket for storage storage.oci.repository The OCI repository for storing OCI signatures and attestation. If you configure one of the artifact storage backends to oci and do not define this key, Tekton Chains stores the attestation alongside the stored OCI artifact itself. If you define this key, the attestation is not stored alongside the OCI artifact and is instead stored in the designated location. See the cosign documentation for additional information. builder.id The builder ID to set for in-toto attestations https://tekton.dev/chains/v2 builddefinition.buildtype The build type for in-toto attestation. When this parameter is https://tekton.dev/chains/v2/slsa , Tekton Chains records in-toto attestations in strict conformance with the SLSA v1.0 specification. When this parameter is https://tekton.dev/chains/v2/slsa-tekton , Tekton Chains records in-toto attestations with additional information, such as the labels and annotations in each TaskRun and PipelineRun object, and also adds each task in a PipelineRun object under resolvedDependencies . https://tekton.dev/chains/v2/slsa , https://tekton.dev/chains/v2/slsa-tekton https://tekton.dev/chains/v2/slsa If you enable the docdb storage method is for any artifacts, configure docstore storage options. For more information about the go-cloud docstore URI format, see the docstore package documentation . Red Hat OpenShift Pipelines supports the following docstore services: firestore dynamodb Table 1.6. Chains configuration: Supported parameters for docstore storage Parameter Description Supported values Default value storage.docdb.url The go-cloud URI reference to a docstore collection. Used if the docdb storage method is enabled for any artifacts. firestore://projects/[PROJECT]/databases/(default)/documents/[COLLECTION]?name_field=name storage.docdb.mongo-server-url The value for the Mongo server URL to use for docdb storage ( MONGO_SERVER_URL ). This URL can include authentication information. 
For production environments, providing authentication information as plain-text configuration might be insecure. Use the alternative storage.docdb.mongo-server-url-dir configuration setting for production environments. storage.docdb.mongo-server-url-dir The directory where a file named MONGO_SERVER_URL is located. This file contains the Mongo server URL to use for docdb storage ( MONGO_SERVER_URL ). Provide this file as a secret and configure mounting this file for the Tekton Chains controller, as described in Creating and mounting the Mongo server URL secret . Example value: /tmp/mongo-url If you enable the grafeas storage method for any artifacts, configure Grafeas storage options. For more information about Grafeas notes and occurrences, see Grafeas concepts . To create occurrences, Red Hat OpenShift Pipelines must first create notes that are used to link occurrences. Red Hat OpenShift Pipelines creates two types of occurrences: ATTESTATION Occurrence and BUILD Occurrence. Red Hat OpenShift Pipelines uses the configurable noteid as the prefix of the note name. It appends the suffix -simplesigning for the ATTESTATION note and the suffix -intoto for the BUILD note. If the noteid field is not configured, Red Hat OpenShift Pipelines uses tekton-<NAMESPACE> as the prefix. Table 1.7. Chains configuration: Supported parameters for Grafeas storage Parameter Description Supported values Default value storage.grafeas.projectid The OpenShift Container Platform project in which the Grafeas server for storing occurrences is located. storage.grafeas.noteid Optional: the prefix to use for the name of all created notes. A string without spaces. storage.grafeas.notehint Optional: the human_readable_name field for the Grafeas ATTESTATION note. This attestation note was generated by Tekton Chains Optionally, you can enable additional uploads of binary transparency attestations. Table 1.8. Chains configuration: Supported parameters for transparency attestation storage Parameter Description Supported values Default value transparency.enabled Enable or disable automatic binary transparency uploads. true , false , manual false transparency.url The URL for uploading binary transparency attestations, if enabled. https://rekor.sigstore.dev Note If you set transparency.enabled to manual , only task runs and pipeline runs with the following annotation are uploaded to the transparency log: chains.tekton.dev/transparency-upload: "true" If you configure the x509 signature backend, you can optionally enable keyless signing with Fulcio. Table 1.9. Chains configuration: Supported parameters for x509 keyless signing with Fulcio Parameter Description Supported values Default value signers.x509.fulcio.enabled Enable or disable requesting automatic certificates from Fulcio. true , false false signers.x509.fulcio.address The Fulcio address for requesting certificates, if enabled. https://v1.fulcio.sigstore.dev signers.x509.fulcio.issuer The expected OIDC issuer. https://oauth2.sigstore.dev/auth signers.x509.fulcio.provider The provider from which to request the ID Token. google , spiffe , github , filesystem Red Hat OpenShift Pipelines attempts to use every provider signers.x509.identity.token.file Path to the file containing the ID Token. signers.x509.tuf.mirror.url The URL for the TUF server. USDTUF_URL/root.json must be present. https://sigstore-tuf-root.storage.googleapis.com If you configure the kms signature backend, set the KMS configuration, including OIDC and Spire, as necessary. Table 1.10. 
Chains configuration: Supported parameters for KMS signing Parameter Description Supported values Default value signers.kms.auth.address URI of the KMS server (the value of VAULT_ADDR ). signers.kms.auth.token Authentication token for the KMS server (the value of VAULT_TOKEN ). Providing the token as plain-text configuration might be insecure. Use the alternative signers.kms.auth.token-path configuration setting for production environments. signers.kms.auth.token-path The full pathname of the file that contains the authentication token for the KMS server (the value of VAULT_TOKEN ). Provide this file as a secret and configure mounting this file for the Tekton Chains controller, as described in Creating and mounting the KMS authentication token secret . Example value: /etc/kms-secrets/KMS_AUTH_TOKEN signers.kms.auth.oidc.path The path for OIDC authentication (for example, jwt for Vault). signers.kms.auth.oidc.role The role for OIDC authentication. signers.kms.auth.spire.sock The URI of the Spire socket for the KMS token (for example, unix:///tmp/spire-agent/public/api.sock ). signers.kms.auth.spire.audience The audience for requesting an SVID from Spire. 1.2.2. Creating and mounting the Mongo server URL secret You can provide the value of the Mongo server URL to use for docdb storage ( MONGO_SERVER_URL ) using a secret. You must create this secret, mount it on the Tekton Chains controller, and set the storage.docdb.mongo-server-url-dir parameter to the directory where the secret is mounted. Prerequisites You installed the OpenShift CLI ( oc ) utility. You are logged in to your OpenShift Container Platform cluster with administrative rights for the openshift-pipelines namespace. Procedure Create a secret named mongo-url with the MONGO_SERVER_URL file that contains the Mongo server URL value by entering the following command: USD oc create secret generic mongo-url -n tekton-chains \ --from-file=MONGO_SERVER_URL=<path>/MONGO_SERVER_URL 1 1 The full path and name of the MONGO_SERVER_URL file that contains the Mongo server URL value. In the TektonConfig custom resource (CR), in the chain section, configure mounting the secret on the Tekton Chains controller and set the storage.docdb.mongo-server-url-dir parameter to the directory where the secret is mounted, as shown in the following example: Example configuration for mounting the mongo-url secret apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: # ... chain: disabled: false storage.docdb.mongo-server-url-dir: /tmp/mongo-url options: deployments: tekton-chains-controller: spec: template: spec: containers: - name: tekton-chains-controller volumeMounts: - mountPath: /tmp/mongo-url name: mongo-url volumes: - name: mongo-url secret: secretName: mongo-url # ... 1.2.3. Creating and mounting the KMS authentication token secret You can provide the authentication token for the KMS server using a secret. For example, if the KMS provider is Hashicorp Vault, the secret must contain the value of VAULT_TOKEN . You must create this secret, mount it on the Tekton Chains controller, and set the signers.kms.auth.token-path parameter to the full pathname of the authentication token file. Prerequisites You installed the OpenShift CLI ( oc ) utility. You are logged in to your OpenShift Container Platform cluster with administrative rights for the openshift-pipelines namespace. 
Procedure Create a secret named kms-secrets with the KMS_AUTH_TOKEN file that contains the authentication token for the KMS server by entering the following command: USD oc create secret generic kms-secrets -n tekton-chains \ --from-file=KMS_AUTH_TOKEN=<path_and_name> 1 1 The full path and name of the file that contains the authentication token for the KMS server, for example, /home/user/KMS_AUTH_TOKEN . You can use another file name instead of KMS_AUTH_TOKEN . In the TektonConfig custom resource (CR), in the chain section, configure mounting the secret on the Tekton Chains controller and set the signers.kms.auth.token-path parameter to the full pathname of the authentication token file, as shown in the following example: Example configuration for mounting the kms-secrets secret apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: # ... chain: disabled: false signers.kms.auth.token-path: /etc/kms-secrets/KMS_AUTH_TOKEN options: deployments: tekton-chains-controller: spec: template: spec: containers: - name: tekton-chains-controller volumeMounts: - mountPath: /etc/kms-secrets name: kms-secrets volumes: - name: kms-secrets secret: secretName: kms-secrets # ... 1.2.4. Enabling Tekton Chains to operate only in selected namespaces By default, the Tekton Chains controller monitors resources in all namespaces. You can customize Tekton Chains to run only in specific namespaces, which provides granular control over its operation. Prerequisites You are logged in to your OpenShift Container Platform cluster with cluster-admin privileges. Procedure In the TektonConfig CR, in the chain section, add the --namespace= argument to contain the namespaces that the controller should monitor. The following example shows the configuration for the Tekton Chains controller to only monitor resources within the dev and test namespaces, filtering PipelineRun and TaskRun objects accordingly: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: chain: disabled: false options: deployments: tekton-chains-controller: spec: template: spec: containers: - args: - --namespace=dev, test 1 name: tekton-chains-controller 1 If the --namespace argument is not provided or is left empty, the controller watches all namespaces by default. 1.3. Secrets for signing data in Tekton Chains Cluster administrators can generate a key pair and use Tekton Chains to sign artifacts using a Kubernetes secret. For Tekton Chains to work, a private key and a password for encrypted keys must exist as part of the signing-secrets secret in the openshift-pipelines namespace. Currently, Tekton Chains supports the x509 and cosign signature schemes. Note Use only one of the supported signature schemes. The x509 signing scheme To use the x509 signing scheme with Tekton Chains, you must fulfill the following requirements: Store the private key in the signing-secrets with the x509.pem structure. Store the private key as an unencrypted PKCS #8 PEM file. The key is of ed25519 or ecdsa type. The cosign signing scheme To use the cosign signing scheme with Tekton Chains, you must fulfill the following requirements: Store the private key in the signing-secrets with the cosign.key structure. Store the password in the signing-secrets with the cosign.password structure. Store the private key as an encrypted PEM file of type ENCRYPTED COSIGN PRIVATE KEY . 1.3.1. 
Generating the x509 key pair by using the TektonConfig CR To use the x509 signing scheme for Tekton Chains secrets, you must generate the x509 key pair. You can generate the x509 key pair by setting the generateSigningSecret field in the TektonConfig custom resource (CR) to true . The Red Hat OpenShift Pipelines Operator generates an ecdsa type key pair: an x509.pem private key and an x509-pub.pem public key. The Operator stores the keys in the signing-secrets secret in the openshift-pipelines namespace. Warning If you set the generateSigningSecret field from true to false , the Red Hat OpenShift Pipelines Operator overrides and empties any value in the signing-secrets secret. Ensure that you store the x509-pub.pem public key outside of the secret to protect the key from deletion. The Operator can use the key at a later stage to verify artifact attestations. The Red Hat OpenShift Pipelines Operator does not provide the following functions to limit potential security issues: Key rotation Auditing key usage Proper access control to the key Prerequisites You installed the OpenShift CLI ( oc ) utility. You are logged in to your OpenShift Container Platform cluster with administrative rights for the openshift-pipelines namespace. Procedure Edit the TektonConfig CR by running the following command: USD oc edit TektonConfig config In the TektonConfig CR, set the generateSigningSecret value to true : Example of creating an ecdsa key pair by using the TektonConfig CR apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: # ... chain: disabled: false generateSigningSecret: true 1 # ... 1 The default value is false . Setting the value to true generates the ecdsa key pair. 1.3.2. Signing with the cosign tool You can use the cosign signing scheme with Tekton Chains by using the cosign tool. Prerequisites You installed the Cosign tool. For information about installing the Cosign tool, see the Sigstore documentation for Cosign . Procedure Generate the cosign.key and cosign.pub key pair by running the following command: USD cosign generate-key-pair k8s://openshift-pipelines/signing-secrets Cosign prompts you for a password and then creates a Kubernetes secret. Store the encrypted cosign.key private key and the cosign.password decryption password in the signing-secrets Kubernetes secret. Ensure that the private key is stored as an encrypted PEM file of the ENCRYPTED COSIGN PRIVATE KEY type. 1.3.3. Signing with the skopeo tool You can generate keys using the skopeo tool and use them in the cosign signing scheme with Tekton Chains. Prerequisites You installed the skopeo tool. Procedure Generate a public/private key pair by running the following command: USD skopeo generate-sigstore-key --output-prefix <mykey> 1 1 Replace <mykey> with a key name of your choice. Skopeo prompts you for a passphrase for the private key and then creates the key files named <mykey>.private and <mykey>.pub . Encode the <mykey>.pub file using the base64 tool by running the following command: USD base64 -w 0 <mykey>.pub > b64.pub Encode the <mykey>.private file using the base64 tool by running the following command: USD base64 -w 0 <mykey>.private > b64.private Encode the passphrase using the base64 tool by running the following command: USD echo -n '<passphrase>' | base64 -w 0 > b64.passphrase 1 1 Replace <passphrase> with the passphrase that you used for the key pair. 
Create the signing-secrets secret in the openshift-pipelines namespace by running the following command: USD oc create secret generic signing-secrets -n openshift-pipelines Edit the signing-secrets secret by running the following command: USD oc edit secret -n openshift-pipelines signing-secrets Add the encoded keys in the data of the secret in the following way: apiVersion: v1 data: cosign.key: <Encoded <mykey>.private> 1 cosign.password: <Encoded passphrase> 2 cosign.pub: <Encoded <mykey>.pub> 3 immutable: true kind: Secret metadata: name: signing-secrets # ... type: Opaque 1 Replace <Encoded <mykey>.private> with the content of the b64.private file. 2 Replace <Encoded passphrase> with the content of the b64.passphrase file. 3 Replace <Encoded <mykey>.pub> with the content of the b64.pub file. 1.3.4. Resolving the "secret already exists" error If the signing-secrets secret is already populated, the command to create this secret might output the following error message: Error from server (AlreadyExists): secrets "signing-secrets" already exists You can resolve this error by deleting the secret. Procedure Delete the signing-secrets secret by running the following command: USD oc delete secret signing-secrets -n openshift-pipelines Re-create the key pairs and store them in the secret using your preferred signing scheme. 1.4. Authenticating to an OCI registry Before pushing signatures to an OCI registry, cluster administrators must configure Tekton Chains to authenticate with the registry. The Tekton Chains controller uses the same service account under which the task runs execute. To set up a service account with the necessary credentials for pushing signatures to an OCI registry, perform the following steps: Procedure Set the namespace and name of the Kubernetes service account. USD export NAMESPACE=<namespace> 1 USD export SERVICE_ACCOUNT_NAME=<service_account> 2 1 The namespace associated with the service account. 2 The name of the service account. Create a Kubernetes secret. USD oc create secret generic registry-credentials \ --from-file=.dockerconfigjson \ 1 --type=kubernetes.io/dockerconfigjson \ -n USDNAMESPACE 1 Substitute with the path to your Docker config file. Default path is ~/.docker/config.json . Give the service account access to the secret. USD oc patch serviceaccount USDSERVICE_ACCOUNT_NAME \ -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n USDNAMESPACE If you patch the default pipeline service account that Red Hat OpenShift Pipelines assigns to all task runs, the Red Hat OpenShift Pipelines Operator will override the service account. As a best practice, you can perform the following steps: Create a separate service account to assign to users' task runs. USD oc create serviceaccount <service_account_name> Associate the service account with the task runs by setting the value of the serviceAccountName field in the task run template. apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 spec: taskRunTemplate: serviceAccountName: build-bot 1 taskRef: name: build-push ... 1 Substitute with the name of the newly created service account. 1.5. Creating and verifying task run signatures without any additional authentication To verify signatures of task runs using Tekton Chains without any additional authentication, perform the following tasks: Generate an encrypted x509 or cosign key pair and store it as a Kubernetes secret. Configure the Tekton Chains backend storage. Create a task run, sign it, and store the signature and the payload as annotations on the task run itself. 
Retrieve the signature and payload from the signed task run. Verify the signature of the task run. Prerequisites Ensure that the following components are installed on the cluster: Red Hat OpenShift Pipelines Operator Tekton Chains Cosign Procedure Generate an encrypted x509 or cosign key pair. For more information about creating a key pair and saving it as a secret, see "Secrets for signing data in Tekton Chains". In the Tekton Chains configuration, disable the OCI storage, and set the task run storage and format to tekton . In the TektonConfig custom resource, set the following values: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: # ... chain: artifacts.oci.storage: "" artifacts.taskrun.format: tekton artifacts.taskrun.storage: tekton # ... For more information about configuring Tekton Chains using the TektonConfig custom resource, see "Configuring Tekton Chains". To restart the Tekton Chains controller to ensure that the modified configuration is applied, enter the following command: USD oc delete po -n openshift-pipelines -l app=tekton-chains-controller Create a task run by entering the following command: USD oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1 1 Replace the example URI with the URI or file path pointing to your task run. Example output taskrun.tekton.dev/build-push-run-output-image-qbjvh created Check the status of the steps by entering the following command. Wait until the process finishes. USD tkn tr describe --last Example output [...truncated output...] NAME STATUS ∙ create-dir-builtimage-9467f Completed ∙ git-source-sourcerepo-p2sk8 Completed ∙ build-and-push Completed ∙ echo Completed ∙ image-digest-exporter-xlkn7 Completed To retrieve the signature from the object stored as base64 encoded annotations, enter the following commands: USD export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}') USD tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-USDTASKRUN_UID}" | base64 -d > sig To verify the signature using the public key that you created, enter the following command: USD cosign verify-blob-attestation --insecure-ignore-tlog --key path/to/cosign.pub --signature sig --type slsaprovenance --check-claims=false /dev/null 1 1 Replace path/to/cosign.pub with the path name of the public key file. Example output Verified OK Additional resources Secrets for signing data in Tekton Chains Configuring Tekton Chains 1.6. Using Tekton Chains to sign and verify image and provenance Cluster administrators can use Tekton Chains to sign and verify images and provenances by performing the following tasks: Generate an encrypted x509 or cosign key pair and store it as a Kubernetes secret. Set up authentication for the OCI registry to store images, image signatures, and signed image attestations. Configure Tekton Chains to generate and sign provenance. Create an image with Kaniko in a task run. Verify the signed image and the signed provenance. Prerequisites Ensure that the following tools are installed on the cluster: Red Hat OpenShift Pipelines Operator Tekton Chains Cosign Rekor jq Procedure Generate an encrypted x509 or cosign key pair. For more information about creating a key pair and saving it as a secret, see "Secrets for signing data in Tekton Chains". Configure authentication for the image registry. To configure the Tekton Chains controller for pushing signatures to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section. 
To configure authentication for a Kaniko task that builds and pushes an image to the registry, create a Kubernetes secret from the docker config.json file that contains the required credentials. USD oc create secret generic <docker_config_secret_name> \ 1 --from-file <path_to_config.json> 2 1 Substitute with the name of the docker config secret. 2 Substitute with the path to the docker config.json file. Configure Tekton Chains by setting the artifacts.taskrun.format , artifacts.taskrun.storage , and transparency.enabled parameters in the chains-config object: USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}' USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}' USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}' Start the Kaniko task. Apply the Kaniko task to the cluster. USD oc apply -f examples/kaniko/kaniko.yaml 1 1 Substitute with the URI or file path to your Kaniko task. Set the appropriate environment variables. USD export REGISTRY=<url_of_registry> 1 USD export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2 1 Substitute with the URL of the registry where you want to push the image. 2 Substitute with the name of the secret in the docker config.json file. Start the Kaniko task. USD tkn task start --param IMAGE=USDREGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir="" --workspace name=dockerconfig,secret=USDDOCKERCONFIG_SECRET_NAME kaniko-chains Observe the logs of this task until all steps are complete. On successful authentication, the final image will be pushed to USDREGISTRY/kaniko-chains . Wait for a minute to allow Tekton Chains to generate the provenance and sign it, and then check the availability of the chains.tekton.dev/signed=true annotation on the task run. USD oc get tr <task_run_name> \ 1 -o json | jq -r .metadata.annotations { "chains.tekton.dev/signed": "true", ... } 1 Substitute with the name of the task run. Verify the image and the attestation. USD cosign verify --key cosign.pub USDREGISTRY/kaniko-chains USD cosign verify-attestation --key cosign.pub USDREGISTRY/kaniko-chains Find the provenance for the image in Rekor. Get the digest of the USDREGISTRY/kaniko-chains image. You can search for it using the task run, or pull the image to extract the digest. Search Rekor to find all entries that match the sha256 digest of the image. USD rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3 ... 1 Substitute with the sha256 digest of the image. 2 The first matching universally unique identifier (UUID). 3 The second matching UUID. The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation. Check the attestation. USD rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq 1.7. Additional resources Secrets for signing data in Tekton Chains Installing OpenShift Pipelines
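For convenience, the following sketch consolidates several of the chain parameters described earlier in this chapter into a single TektonConfig custom resource. It is illustrative only: the repository quay.io/example/attestations is a placeholder, the values follow the string convention used elsewhere in this chapter, and you should keep only the parameters that apply to your environment.
apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    disabled: false
    artifacts.taskrun.format: in-toto            # record task run results as in-toto attestations
    artifacts.taskrun.storage: oci               # store task run signatures and attestations in an OCI registry
    artifacts.oci.format: simplesigning          # payload format for OCI artifacts (the default)
    artifacts.oci.storage: oci                   # store image signatures alongside the image
    storage.oci.repository: quay.io/example/attestations   # placeholder; omit to store next to the image itself
    transparency.enabled: "true"                 # upload to the transparency log (https://rekor.sigstore.dev by default)
Apply this configuration as you would any other TektonConfig change, keeping only the settings that match your signing and storage choices.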
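To inspect the content of a signed provenance after cosign verify-attestation succeeds, you can decode the attestation payload. The following sketch is not part of the original procedure; it assumes the cosign.pub key and the $REGISTRY/kaniko-chains image from the steps above, and it requires jq 1.6 or later for the @base64d filter. The exact layout of the predicate depends on the configured buildDefinition.buildtype.
# Print the predicate type and subject of each verified attestation
cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains \
  | jq -r '.payload | @base64d | fromjson | {predicateType, subject}'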
|
[
"oc edit TektonConfig config",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: {} chain: artifacts.taskrun.format: tekton config: {}",
"chains.tekton.dev/transparency-upload: \"true\"",
"oc create secret generic mongo-url -n tekton-chains --from-file=MONGO_SERVER_URL=<path>/MONGO_SERVER_URL 1",
"apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: chain: disabled: false storage.docdb.mongo-server-url-dir: /tmp/mongo-url options: deployments: tekton-chains-controller: spec: template: spec: containers: - name: tekton-chains-controller volumeMounts: - mountPath: /tmp/mongo-url name: mongo-url volumes: - name: mongo-url secret: secretName: mongo-url",
"oc create secret generic kms-secrets -n tekton-chains --from-file=KMS_AUTH_TOKEN=<path_and_name> 1",
"apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: chain: disabled: false signers.kms.auth.token-path: /etc/kms-secrets/KMS_AUTH_TOKEN options: deployments: tekton-chains-controller: spec: template: spec: containers: - name: tekton-chains-controller volumeMounts: - mountPath: /etc/kms-secrets name: kms-secrets volumes: - name: kms-secrets secret: secretName: kms-secrets",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: chain: disabled: false options: deployments: tekton-chains-controller: spec: template: spec: containers: - args: - --namespace=dev, test 1 name: tekton-chains-controller",
"oc edit TektonConfig config",
"apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: chain: disabled: false generateSigningSecret: true 1",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"skopeo generate-sigstore-key --output-prefix <mykey> 1",
"base64 -w 0 <mykey>.pub > b64.pub",
"base64 -w 0 <mykey>.private > b64.private",
"echo -n '<passphrase>' | base64 -w 0 > b64.passphrase 1",
"oc create secret generic signing-secrets -n openshift-pipelines",
"oc edit secret -n openshift-pipelines signing-secrets",
"apiVersion: v1 data: cosign.key: <Encoded <mykey>.private> 1 cosign.password: <Encoded passphrase> 2 cosign.pub: <Encoded <mykey>.pub> 3 immutable: true kind: Secret metadata: name: signing-secrets type: Opaque",
"Error from server (AlreadyExists): secrets \"signing-secrets\" already exists",
"oc delete secret signing-secrets -n openshift-pipelines",
"export NAMESPACE=<namespace> 1 export SERVICE_ACCOUNT_NAME=<service_account> 2",
"oc create secret registry-credentials --from-file=.dockerconfigjson \\ 1 --type=kubernetes.io/dockerconfigjson -n USDNAMESPACE",
"oc patch serviceaccount USDSERVICE_ACCOUNT_NAME -p \"{\\\"imagePullSecrets\\\": [{\\\"name\\\": \\\"registry-credentials\\\"}]}\" -n USDNAMESPACE",
"oc create serviceaccount <service_account_name>",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 spec: taskRunTemplate: serviceAccountName: build-bot 1 taskRef: name: build-push",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: chain: artifacts.oci.storage: \"\" artifacts.taskrun.format: tekton artifacts.taskrun.storage: tekton",
"oc delete po -n openshift-pipelines -l app=tekton-chains-controller",
"oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1",
"taskrun.tekton.dev/build-push-run-output-image-qbjvh created",
"tkn tr describe --last",
"[...truncated output...] NAME STATUS ∙ create-dir-builtimage-9467f Completed ∙ git-source-sourcerepo-p2sk8 Completed ∙ build-and-push Completed ∙ echo Completed ∙ image-digest-exporter-xlkn7 Completed",
"tkn tr describe --last -o jsonpath=\"{.metadata.annotations.chains\\.tekton\\.dev/signature-taskrun-USDTASKRUN_UID}\" | base64 -d > sig",
"export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}')",
"cosign verify-blob-attestation --insecure-ignore-tlog --key path/to/cosign.pub --signature sig --type slsaprovenance --check-claims=false /dev/null 1",
"Verified OK",
"oc create secret generic <docker_config_secret_name> \\ 1 --from-file <path_to_config.json> 2",
"oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.format\": \"in-toto\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.storage\": \"oci\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"transparency.enabled\": \"true\"}}'",
"oc apply -f examples/kaniko/kaniko.yaml 1",
"export REGISTRY=<url_of_registry> 1 export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2",
"tkn task start --param IMAGE=USDREGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir=\"\" --workspace name=dockerconfig,secret=USDDOCKERCONFIG_SECRET_NAME kaniko-chains",
"oc get tr <task_run_name> \\ 1 -o json | jq -r .metadata.annotations { \"chains.tekton.dev/signed\": \"true\", }",
"cosign verify --key cosign.pub USDREGISTRY/kaniko-chains cosign verify-attestation --key cosign.pub USDREGISTRY/kaniko-chains",
"rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3",
"rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/securing_openshift_pipelines/using-tekton-chains-for-openshift-pipelines-supply-chain-security
|
Getting started with automation hub
|
Getting started with automation hub Red Hat Ansible Automation Platform 2.4 Configure Red Hat automation hub as your default server for Ansible collections content Red Hat Customer Content Services [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_hub/index
|
Chapter 20. Configuring Layer 3 high availability (HA)
|
Chapter 20. Configuring Layer 3 high availability (HA) 20.1. RHOSP Networking service without high availability (HA) Red Hat OpenStack Platform (RHOSP) Networking service deployments without any high availability (HA) features are vulnerable to physical node failures. In a typical deployment, projects create virtual routers, which are scheduled to run on physical Networking service Layer 3 (L3) agent nodes. This becomes an issue when you lose an L3 agent node and the dependent virtual machines subsequently lose connectivity to external networks. Any floating IP addresses are also unavailable. In addition, connectivity is lost between any networks that the router hosts. 20.2. Overview of Layer 3 high availability (HA) This active/passive high availability (HA) configuration uses the industry standard VRRP (as defined in RFC 3768) to protect project routers and floating IP addresses. A virtual router is randomly scheduled across multiple Red Hat OpenStack Platform (RHOSP) Networking service nodes, with one designated as the active router, and the remainder serving in a standby role. Note To deploy Layer 3 (L3) HA, you must maintain similar configuration on the redundant Networking service nodes, including floating IP ranges and access to external networks. In the following diagram, the active Router1 and Router2 routers are running on separate physical L3 Networking service agent nodes. L3 HA has scheduled backup virtual routers on the corresponding nodes, ready to resume service in the case of a physical node failure. When the L3 agent node fails, L3 HA reschedules the affected virtual router and floating IP addresses to a working node: During a failover event, instance TCP sessions through floating IPs remain unaffected, and migrate to the new L3 node without disruption. Only SNAT traffic is affected by failover events. The L3 agent is further protected when in an active/active HA mode. Additional resources Virtual Router Redundancy Protocol (VRRP) 20.3. Layer 3 high availability (HA) failover conditions Layer 3 (L3) high availability (HA) for the Red Hat OpenStack Platform (RHOSP) Networking service automatically reschedules protected resources in the following events: The Networking service L3 agent node shuts down or otherwise loses power because of a hardware failure. The L3 agent node becomes isolated from the physical network and loses connectivity. Note Manually stopping the L3 agent service does not induce a failover event. 20.4. Project considerations for Layer 3 high availability (HA) Red Hat OpenStack Platform (RHOSP) Networking service Layer 3 (L3) high availability (HA) configuration occurs in the back end and is invisible to the project. Projects can continue to create and manage their virtual routers as usual, however there are some limitations to be aware of when designing your L3 HA implementation: L3 HA supports up to 255 virtual routers per project. Internal VRRP messages are transported within a separate internal network, created automatically for each project. This process occurs transparently to the user. When implementing high availability (HA) routers on ML2/OVS, each L3 agent spawns haproxy and neutron-keepalived-state-change-monitor processes for each router. Each process consumes approximately 20MB of memory. By default, each HA router resides on three L3 agents and consumes resources on each of the nodes. Therefore, when sizing your RHOSP networks, ensure that you have allocated enough memory to support the number of HA routers that you plan to implement. 20.5. 
High availability (HA) changes to the RHOSP Networking service The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) API has been updated to allow administrators to set the --ha=True/False flag when creating a router, which overrides the default configuration of l3_ha in /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf . High availability (HA) changes to neutron-server: Layer 3 (L3) HA assigns the active role randomly, regardless of the scheduler used by the Networking service (whether random or leastrouter). The database schema has been modified to handle allocation of virtual IP addresses (VIPs) to virtual routers. A transport network is created to direct L3 HA traffic. HA changes to the Networking service L3 agent: A new keepalived manager has been added, providing load-balancing and HA capabilities. IP addresses are converted to VIPs. 20.6. Enabling Layer 3 high availability (HA) on RHOSP Networking service nodes During installation, Red Hat OpenStack Platform (RHOSP) director enables high availability (HA) for virtual routers by default when you have at least two RHOSP Controllers and are not using distributed virtual routing (DVR). Using an RHOSP Orchestration service (heat) parameter, max_l3_agents_per_router , you can set the maximum number of RHOSP Networking service Layer 3 (L3) agents on which an HA router is scheduled. Prerequisites Your RHOSP deployment does not use DVR. You have at least two RHOSP Controllers deployed. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. Example Create a custom YAML environment file. Example Tip The Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. Set the NeutronL3HA parameter to true in the YAML environment file. This ensures HA is enabled even if director did not set it by default. Set the maximum number of L3 agents on which an HA router is scheduled. Set the max_l3_agents_per_router parameter to a value between the minimum and total number of network nodes in your deployment. (A zero value indicates that the router is scheduled on every agent.) Example In this example, if you deploy four Networking service nodes, only two L3 agents protect each HA virtual router: one active, and one standby. If you set the value of max_l3_agents_per_router to be greater than the number of available network nodes, you can scale out the number of standby routers by adding new L3 agents. For every new L3 agent node that you deploy, the Networking service schedules additional standby versions of the virtual routers until the max_l3_agents_per_router limit is reached. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Note When NeutronL3HA is set to true , all virtual routers that are created default to HA routers. 
When you create a router, you can override the HA option by including the --no-ha option in the openstack router create command: Additional resources Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide 20.7. Reviewing high availability (HA) RHOSP Networking service node configurations Procedure Run the ip address command within the virtual router namespace to return a high availability (HA) device in the result, prefixed with ha- . With Layer 3 HA enabled, virtual routers and floating IP addresses are protected against individual node failure.
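As an illustrative sketch that is not part of the original procedure, the following commands show one way to confirm that a newly created router is HA and to list the L3 agents that host it. The router name my-ha-router is an example, and the ha field in the router output is visible only to administrative users:
$ openstack router create my-ha-router
$ openstack router show my-ha-router -c ha -c status
$ openstack network agent list --router my-ha-router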
|
[
"source ~/stackrc",
"vi /home/stack/templates/my-neutron-environment.yaml",
"parameter_defaults: NeutronL3HA: 'true'",
"parameter_defaults: NeutronL3HA: 'true' ControllerExtraConfig: neutron::server::max_l3_agents_per_router: 2",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml",
"openstack router create --no-ha",
"ip netns exec qrouter-b30064f9-414e-4c98-ab42-646197c74020 ip address <snip> 2794: ha-45249562-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 12:34:56:78:2b:5d brd ff:ff:ff:ff:ff:ff inet 169.254.0.2/24 brd 169.254.0.255 scope global ha-54b92d86-4f"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/config-l3-ha_rhosp-network
|
8.4. Guest Virtual Machine Memory Allocation
|
8.4. Guest Virtual Machine Memory Allocation The following procedure shows how to allocate memory for a guest virtual machine. This allocation and assignment works only at boot time, and any changes to the memory values do not take effect until the next reboot. The maximum memory that can be allocated per guest is 4 TiB, provided that this memory allocation is not more than what the host physical machine resources can provide. Valid memory units include: b or bytes for bytes KB for kilobytes (10^3 or blocks of 1,000 bytes) k or KiB for kibibytes (2^10 or blocks of 1,024 bytes) MB for megabytes (10^6 or blocks of 1,000,000 bytes) M or MiB for mebibytes (2^20 or blocks of 1,048,576 bytes) GB for gigabytes (10^9 or blocks of 1,000,000,000 bytes) G or GiB for gibibytes (2^30 or blocks of 1,073,741,824 bytes) TB for terabytes (10^12 or blocks of 1,000,000,000,000 bytes) T or TiB for tebibytes (2^40 or blocks of 1,099,511,627,776 bytes) Note that all values are rounded up to the nearest kibibyte by libvirt, and may be further rounded to the granularity supported by the hypervisor. Some hypervisors also enforce a minimum, such as 4000 KiB (4000 x 2^10, or 4,096,000 bytes). The units for this value are determined by the optional unit attribute of the memory element, which defaults to kibibytes (KiB), where the value given is multiplied by 2^10 (blocks of 1,024 bytes). In cases where the guest virtual machine crashes, the optional attribute dumpCore can be used to control whether the guest virtual machine's memory should be included in the generated coredump ( dumpCore='on' ) or not included ( dumpCore='off' ). Note that the default setting is on , so if the parameter is not set to off , the guest virtual machine memory is included in the coredump file. The currentMemory element determines the actual memory allocation for a guest virtual machine. This value can be less than the maximum allocation, to allow for ballooning up the guest virtual machine's memory on the fly. If this is omitted, it defaults to the same value as the memory element. The unit attribute behaves the same as for memory . In all cases for this section, the domain XML needs to be altered as follows:
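In addition to the required alteration shown in the accompanying example, the following variant is an illustration only (not part of the original example) of how the same 524288 KiB allocation can be written with a larger unit; libvirt converts the value to kibibytes internally:
<domain>
  <!-- 512 MiB expressed in mebibytes; equivalent to 524288 KiB in the accompanying example -->
  <memory unit='MiB' dumpCore='off'>512</memory>
  <currentMemory unit='MiB'>512</currentMemory>
</domain>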
|
[
"<domain> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated coredump file --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> </domain>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-mem-dump-off
|
Chapter 20. Clustering
|
Chapter 20. Clustering Support for clufter , a tool for transforming and analyzing cluster configuration formats The clufter package, available as a Technology Preview in Red Hat Enterprise Linux 7, provides a tool for transforming and analyzing cluster configuration formats. It can be used to assist with migration from an older stack configuration to a newer configuration that leverages Pacemaker. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command.
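As a hedged illustration (the available commands can vary by clufter version, so confirm them in the clufter -h output), you can explore the tool as follows:
# List the available clufter commands and global options
clufter -h
# Show the options of the ccs2pcs conversion command, which is assumed here to be
# present for converting an older cluster.conf configuration to a Pacemaker configuration
clufter ccs2pcs -h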
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/technology-preview-clustering
|
Chapter 10. Installing a cluster on Azure in a restricted network with user-provisioned infrastructure
|
Chapter 10. Installing a cluster on Azure in a restricted network with user-provisioned infrastructure In OpenShift Container Platform, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you must use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you have manually created long-term credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 10.1. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 10.1.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 10.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.2. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 10.2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. 
While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage 10.2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. 
Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 10.2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 10.2.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.2.5. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 10.2.6. Required Azure permissions for user-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. 
For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 10.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 10.2. Required permissions for creating compute resources Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/deallocate/action Example 10.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 10.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Example 10.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 10.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 10.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 10.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 10.9. Required permissions for creating deployments Microsoft.Resources/deployments/read Microsoft.Resources/deployments/write Microsoft.Resources/deployments/validate/action Microsoft.Resources/deployments/operationstatuses/read Example 10.10. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/write Example 10.11. 
Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 10.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. Example 10.13. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 10.14. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/images/delete Example 10.15. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 10.16. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Example 10.17. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 10.18. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 10.19. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions related to resource group creation to your subscription. After the resource group is created, you can scope the rest of the permissions to the created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 10.2.7. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. 
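If you plan to use a custom role instead of the built-in Contributor and User Access Administrator roles, a minimal sketch of defining one with the Azure CLI follows. The role name ocp-upi-installer, the JSON file name, and the abbreviated Actions list are assumptions for illustration only; include every permission from the lists earlier in this section and scope the role to your subscription:
$ cat <<'EOF' > ocp-installer-role.json
{
  "Name": "ocp-upi-installer",
  "IsCustom": true,
  "Description": "Custom role for OpenShift Container Platform UPI installs (abbreviated permission list)",
  "Actions": [
    "Microsoft.Authorization/roleAssignments/read",
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/write"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription_id>" ]
}
EOF
$ az role definition create --role-definition ocp-installer-role.json
You can then reference the custom role by name when you create the service principal or assign roles later in this chapter.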
If you want to use a custom role, you have created a custom role with the required permissions listed in the Required Azure permissions for user-provisioned infrastructure section. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. If you applied the Contributor role to your service principal, assign the User Administrator Access role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 10.2.8. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. 
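You can optionally check which regions your subscription exposes before you run the installation program. This Azure CLI query is a sketch and is not part of the documented procedure:
$ az account list-locations --query "[].{Name:name, DisplayName:displayName}" -o table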
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 10.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 10.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 10.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 10.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.2. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 10.3.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 10.20. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 10.3.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 10.21. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 10.4. Using the Azure Marketplace offering Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat. To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker or control plane nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
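Before you start the procedure, it can be useful to confirm that the Azure CLI is logged in to the account and subscription that hold the Marketplace entitlement. This optional check is a sketch, not part of the documented procedure:
$ az account show --query "{Subscription:name, SubscriptionId:id, TenantId:tenantId}" -o table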
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. If you use the Azure Resource Manager (ARM) template to deploy your compute nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "413.92.2023101700" } ... } ... } 10.4.1. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 10.4.2. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
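For example, the following command creates an ECDSA key instead of an ed25519 key; the key path is an assumption and can be any location that you prefer. The remaining steps are the same regardless of the key algorithm:
$ ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa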
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 10.5. Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 10.5.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. 
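The following procedure converts a Butane config into a machine config manifest with the butane command-line tool, so it assumes that the binary is installed on the host where you run openshift-install. A quick way to confirm that assumption before you begin:
$ butane --version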
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 10.5.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. 
You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. 
If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field: networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4 1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 
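A simple copy to a location outside the installation directory is enough to preserve the file for later installations; the backup file name shown here is an assumption:
$ cp <installation_directory>/install-config.yaml ~/install-config.yaml.backup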
If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. 10.5.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.5.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 10.5.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. 
This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exists as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 10.6. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" Note If you want to assign a custom role with all the required permissions to the identity, run the following command: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role <custom_role> \ 1 --scope "USD{RESOURCE_GROUP_ID}" 1 Specifies the custom role name. 10.7. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. 
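If you need to define the storage account name manually, one possible way to derive a compliant value from the cluster name is to lower-case it, remove characters other than letters and digits, and truncate it to 24 characters. The SA_NAME variable in this sketch is an assumption and is not referenced by the remaining commands, which expect ${CLUSTER_NAME}sa:
$ export SA_NAME=$(echo "${CLUSTER_NAME}sa" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9' | cut -c1-24)
$ echo ${SA_NAME}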
For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>."rhel-coreos-extensions"."azure-disk".url'` where: <architecture> Specifies the architecture, valid values include x86_64 or aarch64 . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the local VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 10.8. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 10.9. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 10.9.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 10.22. 01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 10.10. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. 
Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters storageAccount="USD{CLUSTER_NAME}sa" \ 3 --parameters architecture="<architecture>" 4 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of your Azure storage account. 4 Specify the system architecture. Valid values are x64 (default) or Arm64 . 10.10.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 10.23. 02_storage.json ARM template { "USDschema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "architecture": { "type": "string", "metadata": { "description": "The architecture of the Virtual Machines" }, "defaultValue": "x64", "allowedValues": [ "Arm64", "x64" ] }, "baseName": { "type": "string", "minLength": 1, "metadata": { "description": "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "storageAccount": { "type": "string", "metadata": { "description": "The Storage Account name" } }, "vhdBlobURL": { "type": "string", "metadata": { "description": "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables": { "location": "[resourceGroup().location]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName": "[parameters('baseName')]", "imageNameGen2": "[concat(parameters('baseName'), '-gen2')]", "imageRelease": "1.0.0" }, "resources": [ { "apiVersion": "2021-10-01", "type": "Microsoft.Compute/galleries", "name": "[variables('galleryName')]", "location": "[variables('location')]", "resources": [ { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageName')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V1", "identifier": { "offer": "rhcos", "publisher": "RedHat", "sku": "basic" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageName')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": 
"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] }, { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V2", "identifier": { "offer": "rhcos-gen2", "publisher": "RedHat-gen2", "sku": "gen2" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageNameGen2')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] } ] } ] } 10.11. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 10.11.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 10.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 10.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 10.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 10.12. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. 
Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 10.12.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 10.24. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : 
"Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip-v4", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "[variables('masterLoadBalancerName')]" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", 
"properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 10.13. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameter bootstrapVMSize="Standard_D4s_v3" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 Optional: Specify the size of the bootstrap VM. Use a VM size compatible with your specified architecture. If this value is not defined, the default value from the template is set. 10.13.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 10.25. 
04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "clusterNsgName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 10.14. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
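Optionally, before you deploy the control plane machines, you can confirm that the bootstrap virtual machine created in the previous section is up; a minimal check that filters on the -bootstrap suffix used by the template:

$ az vm list -g ${RESOURCE_GROUP} -d \
    --query "[?contains(name, 'bootstrap')].{name:name, state:powerState, ip:privateIps}" -o table
# the ${INFRA_ID}-bootstrap machine should report a power state of "VM running"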
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters masterVMSize="Standard_D8s_v3" 3 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 Optional: Specify the size of the Control Plane VM. Use a VM size compatible with your specified architecture. If this value is not defined, the default value from the template is set. 10.14.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 10.26. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : 
"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 10.15. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. 
Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 10.16. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. 
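The worker template also exposes a numberOfNodes parameter (shown in the ARM template below), which defaults to three compute machines. If you want a different initial pool size, you can pass the parameter when you run the deployment command in the following steps; a hedged variant, to be run after WORKER_IGNITION has been exported as described next, where 5 is only an example value within the template's allowed range:

$ az deployment group create -g ${RESOURCE_GROUP} \
    --template-file "<installation_directory>/06_workers.json" \
    --parameters workerIgnition="${WORKER_IGNITION}" \
    --parameters baseName="${INFRA_ID}" \
    --parameters numberOfNodes=5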
Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters nodeVMSize="Standard_D4s_v3" 3 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 Optional: Specify the size of the compute node VM. Use a VM size compatible with your specified architecture. If this value is not defined, the default value from the template is set. 10.16.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 10.27. 06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" 
: { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 10.17. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 10.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. 
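Because additional CSRs continue to appear as machines join the cluster, it can be easier to keep the list on screen than to poll it manually; for example, on Linux:

$ watch -n5 oc get csr
# refreshes the CSR list every five seconds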
After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.20. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . 
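Before adding the Ingress records, you can optionally confirm that the api record created earlier resolves from your workstation; a quick check, assuming dig is installed and that you created the public api record:

$ dig +short api.${CLUSTER_NAME}.${BASE_DOMAIN}
# should return the public IP address assigned to the ${INFRA_ID}-master-pip address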
Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 10.21. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.22. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service.
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameter bootstrapVMSize=\"Standard_D4s_v3\" 3",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters masterVMSize=\"Standard_D8s_v3\" 3",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters nodeVMSize=\"Standard_D4s_v3\" 3",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-restricted-networks-azure-user-provisioned
|
Chapter 15. Distributed tracing
|
Chapter 15. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams on Red Hat Enterprise Linux, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. Tracing complements the available JMX metrics . How AMQ Streams supports tracing Support for tracing is provided for the following clients and components. Kafka clients: Kafka producers and consumers Kafka Streams API applications Kafka components: Kafka Connect Kafka Bridge MirrorMaker MirrorMaker 2.0 To enable tracing, you perform four high-level tasks: Enable a Jaeger tracer. Enable the Interceptors: For Kafka clients, you instrument your application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). For Kafka components, you set configuration properties for each component. Set tracing environment variables . Deploy the client or component. When instrumented, clients generate trace data. For example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, MirrorMaker 2.0, and Kafka Connect: Enable tracing for MirrorMaker Enable tracing for MirrorMaker 2.0 Enable tracing for Kafka Connect Enable tracing for the Kafka Bridge Prerequisites The Jaeger backend components are deployed to the host operating system. For deployment instructions, see the Jaeger deployment documentation . 15.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 15.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 15.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . 
Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 15.2.2. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add a Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. 
To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. Table 15.1. BiFunctions to define custom span names BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 15.2.3. Instrumenting Kafka Streams applications for tracing Instrument Kafka Streams applications for distributed tracing using a supplier interface. This enables the Interceptors in the application. Procedure In each Kafka Streams application: Add the opentracing-kafka-streams dependency to the Kafka Streams application's pom.xml file. 
<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 15.3. Setting up tracing for MirrorMaker and Kafka Connect This section describes how to configure MirrorMaker, MirrorMaker 2.0, and Kafka Connect for distributed tracing. You must enable a Jaeger tracer for each component. 15.3.1. Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. Procedure Configure and enable a Jaeger tracer. Edit the /opt/kafka/config/consumer.properties file. Add the following Interceptor property: consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Edit the /opt/kafka/config/producer.properties file. Add the following Interceptor property: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor Start MirrorMaker with the consumer and producer configuration files as parameters: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2 15.3.2. Enabling tracing for MirrorMaker 2.0 Enable distributed tracing for MirrorMaker 2.0 by defining the Interceptor properties in the MirrorMaker 2.0 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2.0 component. Procedure Configure and enable a Jaeger tracer. Edit the MirrorMaker 2.0 configuration properties file, ./config/connect-mirror-maker.properties , and add the following properties: header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor 1 Prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. 2 Enables the Interceptors for MirrorMaker 2.0. Start MirrorMaker 2.0 using the instructions in Section 9.4, "Synchronizing data between Kafka clusters using MirrorMaker 2.0" . Additional resources Chapter 9, Using AMQ Streams with MirrorMaker 2.0 15.3.3. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Procedure Configure and enable a Jaeger tracer. Edit the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the /opt/kafka/config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the /opt/kafka/config/connect-distributed.properties file. 
Add the following properties to the configuration file: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Save the configuration file. Set tracing environment variables and then run Kafka Connect in standalone or distributed mode. The Interceptors in Kafka Connect's internal consumers and producers are now enabled. Additional resources Section 15.5, "Environment variables for tracing" Section 8.1.3, "Running Kafka Connect in standalone mode" Section 8.2.3, "Running distributed Kafka Connect" 15.4. Enabling tracing for the Kafka Bridge Enable distributed tracing for the Kafka Bridge by editing the Kafka Bridge configuration file. You can then deploy a Kafka Bridge instance that is configured for distributed tracing to the host operating system. Traces are generated when: The Kafka Bridge sends messages to HTTP clients and consumes messages from HTTP clients HTTP clients send HTTP requests to send and receive messages through the Kafka Bridge To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Edit the config/application.properties file in the Kafka Bridge installation directory. Remove the code comments from the following line: bridge.tracing=jaeger Save the configuration file. Run the bin/kafka_bridge_run.sh script using the configuration properties as a parameter: cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties The Interceptors in the Kafka Bridge's internal consumers and producers are now enabled. Additional resources Section 12.1.6, "Configuring Kafka Bridge properties" 15.5. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients and components. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Table 15.2. Jaeger tracer environment variables Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger and b3 . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation . 
JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format ${envVarName:default}, where :default is optional and identifies a value to use if the environment variable cannot be found.
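For reference, the following is a minimal, self-contained sketch of a traced Kafka Streams application that ties the steps above together: it builds a Jaeger tracer from the JAEGER_* environment variables in Table 15.2, registers it with GlobalTracer, and passes a TracingKafkaClientSupplier to KafkaStreams. The application ID, bootstrap address, and topic names are placeholder assumptions, not values taken from this guide.
import java.util.Properties;
import io.jaegertracing.Configuration;
import io.opentracing.Tracer;
import io.opentracing.contrib.kafka.streams.TracingKafkaClientSupplier;
import io.opentracing.util.GlobalTracer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class TracedStreamsApp {
    public static void main(String[] args) {
        // Build the tracer from the JAEGER_* environment variables and register it globally.
        Tracer tracer = Configuration.fromEnv().getTracer();
        GlobalTracer.register(tracer);

        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "traced-streams-app");  // placeholder
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());
        config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());

        // Copy records from one topic to another; the supplier wraps the internal
        // clients so that every consumed and produced record generates a span.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("source-topic").to("target-topic");                      // placeholder topics

        KafkaStreams streams = new KafkaStreams(builder.build(),
                new StreamsConfig(config), new TracingKafkaClientSupplier(tracer));
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
Before running the sketch, set at least JAEGER_SERVICE_NAME, and to sample all traces during testing use the Constant sampling strategy with a parameter of 1 (commonly spelled JAEGER_SAMPLER_TYPE=const and JAEGER_SAMPLER_PARAM=1 by the Jaeger client).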
|
[
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency>",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency>",
"// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency>",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);",
"KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"bridge.tracing=jaeger",
"cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/assembly-distributed-tracing-str
|
Release notes
|
Release notes OpenShift Container Platform 4.15 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team
|
[
"olm.og.<operator_group_name>.<admin_edit_or_view>-<hash_value>",
"Bundle unpacking failed. Reason: DeadlineExceeded, Message: Job was active longer than specified deadline",
"cd ~/clusterconfigs/openshift vim openshift-worker-0.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: annotations: bmac.agent-install.openshift.io/installer-args: '[\"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", \"--save-partindex\", \"1\", \"-n\"]' 1 2 3 4 5 inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <fqdn> 6 bmac.agent-install.openshift.io/role: <role> 7 generation: 1 name: openshift-worker-0 namespace: mynamespace spec: automatedCleaningMode: disabled bmc: address: idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1 8 credentialsName: bmc-secret-openshift-worker-0 disableCertificateVerification: true bootMACAddress: 94:6D:AE:AB:EE:E8 bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/sda",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{infra_env_id}/hosts/USD{host_id}/installer-args -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"args\": [ \"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", 1 2 3 4 5 \"--save-partindex\", \"1\", \"-n\" ] } ' | jq",
"oc adm release info 4.15.47 --pullspecs",
"oc adm release info 4.15.46 --pullspecs",
"oc adm release info 4.15.45 --pullspecs",
"oc adm release info 4.15.44 --pullspecs",
"oc adm release info 4.15.43 --pullspecs",
"oc adm release info 4.15.42 --pullspecs",
"oc adm release info 4.15.41 --pullspecs",
"oc adm release info 4.15.39 --pullspecs",
"oc adm release info 4.15.38 --pullspecs",
"oc adm release info 4.15.37 --pullspecs",
"apiVersion: v1 data: enable-nodeip-debug: \"true\" kind: ConfigMap metadata: name: logging namespace: openshift-vsphere-infra",
"oc adm release info 4.15.36 --pullspecs",
"oc adm release info 4.15.35 --pullspecs",
"oc adm release info 4.15.34 --pullspecs",
"oc adm release info 4.15.33 --pullspecs",
"oc adm release info 4.15.32 --pullspecs",
"oc adm release info 4.15.31 --pullspecs",
"oc adm release info 4.15.30 --pullspecs",
"oc adm release info 4.15.29 --pullspecs",
"oc adm release info 4.15.28 --pullspecs",
"oc adm release info 4.15.27 --pullspecs",
"oc adm release info 4.15.25 --pullspecs",
"oc adm release info 4.15.24 --pullspecs",
"oc adm release info 4.15.23 --pullspecs",
"oc adm release info 4.15.22 --pullspecs",
"oc adm release info 4.15.21 --pullspecs",
"oc adm release info 4.15.20 --pullspecs",
"oc adm release info 4.15.19 --pullspecs",
"oc adm release info 4.15.18 --pullspecs",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.15-route-config-not-supported-in-4.16\":\"true\"}}' --type=merge",
"oc adm release info 4.15.17 --pullspecs",
"oc adm release info 4.15.16 --pullspecs",
"oc adm release info 4.15.15 --pullspecs",
"oc adm release info 4.15.14 --pullspecs",
"oc adm release info 4.15.13 --pullspecs",
"oc adm release info 4.15.12 --pullspecs",
"oc adm release info 4.15.11 --pullspecs",
"oc adm release info 4.15.10 --pullspecs",
"oc adm release info 4.15.9 --pullspecs",
"oc adm release info 4.15.8 --pullspecs",
"oc adm release info 4.15.6 --pullspecs",
"oc adm release info 4.15.5 --pullspecs",
"oc adm release info 4.15.3 --pullspecs",
"oc adm release info 4.15.2 --pullspecs"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/release_notes/index
|
14.3.2. Support for SSH Certificates
|
14.3.2. Support for SSH Certificates Support for certificate authentication of users and hosts using the new OpenSSH certificate format was introduced in Red Hat Enterprise Linux 6.5, in the openssh-5.3p1-94.el6 package. If required, to ensure the latest OpenSSH package is installed, enter the following command as root :
|
[
"~]# yum install openssh Package openssh-5.3p1-104.el6_6.1.i686 already installed and latest version Nothing to do"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-support_for_openssh_certificates
|
Preface
|
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Azure clusters. Note Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Microsoft Azure Deploy standalone Multicloud Object Gateway component
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_microsoft_azure/preface-azure
|
21.6. Troubleshooting with Serial Consoles
|
21.6. Troubleshooting with Serial Consoles Linux kernels can output information to serial ports. This is useful for debugging kernel panics and hardware issues with video devices or headless servers. The subsections in this section cover setting up serial console output for host physical machines using the KVM hypervisor. This section covers how to enable serial console output for fully virtualized guests. Fully virtualized guest serial console output can be viewed with the virsh console command. Be aware that fully virtualized guest serial consoles have some limitations; at present, output data may be dropped or scrambled. The serial port is called ttyS0 on Linux or COM1 on Windows. You must configure the virtualized operating system to output information to the virtual serial port. To output kernel information from a fully virtualized Linux guest into the domain, modify the /boot/grub/grub.conf file and append the following to the kernel line: console=tty0 console=ttyS0,115200 . Reboot the guest. On the host, access the serial console with the following command: You can also use virt-manager to display the virtual text console. In the guest console window, select Serial 1 in Text Consoles from the View menu.
|
[
"title Red Hat Enterprise Linux Server (2.6.32-36.x86-64) root (hd0,0) kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/volgroup00/logvol00 console=tty0 console=ttyS0,115200 initrd /initrd-2.6.32-36.x86-64.img",
"virsh console"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-troubleshooting_-troubleshooting_with_serial_consoles
|
Chapter 5. OAuthClient [oauth.openshift.io/v1]
|
Chapter 5. OAuthClient [oauth.openshift.io/v1] Description OAuthClient describes an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description accessTokenInactivityTimeoutSeconds integer AccessTokenInactivityTimeoutSeconds overrides the default token inactivity timeout for tokens granted to this client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. This value needs to be set only if the default set in configuration is not appropriate for this client. Valid values are: - 0: Tokens for this client never time out - X: Tokens time out if there is no activity for X seconds The current minimum allowed value for X is 300 (5 minutes) WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenMaxAgeSeconds integer AccessTokenMaxAgeSeconds overrides the default access token max age for tokens granted to this client. 0 means no expiration. additionalSecrets array (string) AdditionalSecrets holds other secrets that may be used to identify the client. This is useful for rotation and for service account token validation apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources grantMethod string GrantMethod is a required field which determines how to handle grants for this client. Valid grant handling methods are: - auto: always approves grant requests, useful for trusted clients - prompt: prompts the end user for approval of grant requests, useful for third-party clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURIs array (string) RedirectURIs is the valid redirection URIs associated with a client respondWithChallenges boolean RespondWithChallenges indicates whether the client wants authentication needed responses made in the form of challenges instead of redirects scopeRestrictions array ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. scopeRestrictions[] object ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. secret string Secret is the unique secret associated with a client 5.1.1. .scopeRestrictions Description ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. Type array 5.1.2. 
.scopeRestrictions[] Description ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. Type object Property Type Description clusterRole object ClusterRoleScopeRestriction describes restrictions on cluster role scopes literals array (string) ExactValues means the scope has to match a particular set of strings exactly 5.1.3. .scopeRestrictions[].clusterRole Description ClusterRoleScopeRestriction describes restrictions on cluster role scopes Type object Required roleNames namespaces allowEscalation Property Type Description allowEscalation boolean AllowEscalation indicates whether you can request roles and their escalating resources namespaces array (string) Namespaces is the list of namespaces that can be referenced. * means any of them (including *) roleNames array (string) RoleNames is the list of cluster roles that can referenced. * means anything 5.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclients DELETE : delete collection of OAuthClient GET : list or watch objects of kind OAuthClient POST : create an OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients GET : watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclients/{name} DELETE : delete an OAuthClient GET : read the specified OAuthClient PATCH : partially update the specified OAuthClient PUT : replace the specified OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients/{name} GET : watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/oauth.openshift.io/v1/oauthclients HTTP method DELETE Description delete collection of OAuthClient Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClient Table 5.3. HTTP responses HTTP code Reponse body 200 - OK OAuthClientList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClient Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body OAuthClient schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 202 - Accepted OAuthClient schema 401 - Unauthorized Empty 5.2.2. /apis/oauth.openshift.io/v1/watch/oauthclients HTTP method GET Description watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/oauth.openshift.io/v1/oauthclients/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method DELETE Description delete an OAuthClient Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClient Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClient Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClient Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body OAuthClient schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty 5.2.4. /apis/oauth.openshift.io/v1/watch/oauthclients/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method GET Description watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
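As an illustration of the endpoints listed above, the following sketch creates an OAuthClient by sending a POST request to /apis/oauth.openshift.io/v1/oauthclients with the standard JDK HTTP client. The API server URL, bearer token, and all field values are placeholder assumptions; cluster CA trust handling is omitted, and in practice the object is more commonly created with oc apply.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateOAuthClient {
    public static void main(String[] args) throws Exception {
        String apiServer = "https://api.example.com:6443";   // placeholder cluster API URL
        String token = System.getenv("OPENSHIFT_TOKEN");     // placeholder bearer token

        // Minimal OAuthClient body using the fields documented in section 5.1.
        String body = "{"
                + "\"apiVersion\":\"oauth.openshift.io/v1\","
                + "\"kind\":\"OAuthClient\","
                + "\"metadata\":{\"name\":\"demo-oauth-client\"},"
                + "\"secret\":\"change-me\","
                + "\"grantMethod\":\"auto\","
                + "\"redirectURIs\":[\"https://app.example.com/callback\"]"
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/oauth.openshift.io/v1/oauthclients"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200, 201, and 202 are the success codes listed in Table 5.6.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}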
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/oauth_apis/oauthclient-oauth-openshift-io-v1
|
Chapter 35. File language
|
Chapter 35. File language The File Expression Language is an extension to the language, adding file related capabilities. These capabilities are related to common use cases working with file path and names. The goal is to allow expressions to be used with the components for setting dynamic file patterns for both consumer and producer. Note The file language is merged with language which means you can use all the file syntax directly within the simple language. 35.1. Dependencies The File language is part of camel-core . When using file with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 35.2. File Language options The File language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 35.3. Syntax This language is an extension to the language so the syntax applies also. So the table below only lists the additional file related functions. All the file tokens use the same expression name as the method on the java.io.File object, for instance file:absolute refers to the java.io.File.getAbsolute() method. Notice that not all expressions are supported by the current Exchange. For instance the component supports some options, whereas the File component supports all of them. Expression Type File Consumer File Producer FTP Consumer FTP Producer Description file:name String yes no yes no refers to the file name (is relative to the starting directory, see note below) file:name.ext String yes no yes no refers to the file extension only file:name.ext.single String yes no yes no refers to the file extension. If the file extension has multiple dots, then this expression strips and only returns the last part. file:name.noext String yes no yes no refers to the file name with no extension (is relative to the starting directory, see note below) file:name.noext.single String yes no yes no refers to the file name with no extension (is relative to the starting directory, see note below). If the file extension has multiple dots, then this expression strips only the last part, and keep the others. file:onlyname String yes no yes no refers to the file name only with no leading paths. file:onlyname.noext String yes no yes no refers to the file name only with no extension and with no leading paths. file:onlyname.noext.single String yes no yes no refers to the file name only with no extension and with no leading paths. If the file extension has multiple dots, then this expression strips only the last part, and keep the others. file:ext String yes no yes no refers to the file extension only file:parent String yes no yes no refers to the file parent file:path String yes no yes no refers to the file path file:absolute Boolean yes no no no refers to whether the file is regarded as absolute or relative file:absolute.path String yes no no no refers to the absolute file path file:length Long yes no yes no refers to the file length returned as a Long type file:size Long yes no yes no refers to the file length returned as a Long type file:modified Date yes no yes no Refers to the file last modified returned as a Date type date:_command:pattern_ String yes yes yes yes for date formatting using the java.text.SimpleDateFormat patterns. 
Is an extension to the simple language. Additional command is: file (consumers only) for the last modified timestamp of the file. Notice: all the commands from the simple language can also be used. 35.4. File token example 35.4.1. Relative paths We have a java.io.File handle for the file hello.txt in the following relative directory: .\filelanguage\test . And we configure our endpoint to use this starting directory .\filelanguage . The file tokens will return as: Expression Returns file:name test\hello.txt file:name.ext txt file:name.noext test\hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent filelanguage\test file:path filelanguage\test\hello.txt file:absolute false file:absolute.path \workspace\camel\camel-core\target\filelanguage\test\hello.txt 35.4.2. Absolute paths We have a java.io.File handle for the file hello.txt in the following absolute directory: \workspace\camel\camel-core\target\filelanguage\test . And we configure our endpoint to use the absolute starting directory \workspace\camel\camel-core\target\filelanguage . The file tokens will return as: Expression Returns file:name test\hello.txt file:name.ext txt file:name.noext test\hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent \workspace\camel\camel-core\target\filelanguage\test file:path \workspace\camel\camel-core\target\filelanguage\test\hello.txt file:absolute true file:absolute.path \workspace\camel\camel-core\target\filelanguage\test\hello.txt 35.5. Samples You can enter a fixed file name such as myfile.txt : fileName="myfile.txt" Let's assume we use the file consumer to read files and want to move the read files to a backup folder with the current date as a sub folder. This can be done using an expression like: fileName="backup/${date:now:yyyyMMdd}/${file:name.noext}.bak" Relative folder names are also supported, so if the backup folder should be a sibling folder, you can append .. as shown: fileName="../backup/${date:now:yyyyMMdd}/${file:name.noext}.bak" As this is an extension to the simple language, we also have access to all of its features, so in this use case we want to use the in.header.type as a parameter in the dynamic expression: fileName="../backup/${date:now:yyyyMMdd}/type-${in.header.type}/backup-of-${file:name.noext}.bak" If you have a custom date you want to use in the expression then Camel supports retrieving dates from the message header: fileName="orders/order-${in.header.customerId}-${date:in.header.orderDate:yyyyMMdd}.xml" And finally we can also use a bean expression to invoke a POJO class that generates some String output (or convertible to String) to be used: fileName="uniquefile-${bean:myguidgenerator.generateid}.txt" Of course all of this can be combined in one expression where you can use both the file and the simple language in one combined expression. This is pretty powerful for those common file path patterns. (A short route example using these expressions follows the configuration options below.) 35.6. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center.
String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. 
String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. 
This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. 
true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. 
CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. 
If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). 
true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. 
Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
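The route below is a minimal sketch of how the sample expressions from section 35.5 are typically used in the fileName option of a file producer endpoint; the directory names are placeholder assumptions, not values from this reference.
import org.apache.camel.builder.RouteBuilder;

public class BackupRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        // Consume files from a placeholder input directory and write each one to a dated
        // backup file; the fileName option is evaluated with the file language at runtime.
        from("file:orders")
            .to("file:backup?fileName=${date:now:yyyyMMdd}/${file:name.noext}.bak");
    }
}
Because the expression is evaluated per exchange, each file consumed from the orders directory is written under a date-stamped subfolder of backup with a .bak extension.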
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>",
"fileName=\"myfile.txt\"",
"fileName=\"backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak\"",
"fileName=\"../backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak\"",
"fileName=\"../backup/USD{date:now:yyyyMMdd}/type-USD{in.header.type}/backup-of-USD{file:name.noext}.bak\"",
"fileName=\"orders/order-USD{in.header.customerId}-USD{date:in.header.orderDate:yyyyMMdd}.xml\"",
"fileName=\"uniquefile-USD{bean:myguidgenerator.generateid}.txt\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-file-language-starter
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_multiple_openshift_data_foundation_storage_clusters/making-open-source-more-inclusive
|
4.4. Model Validation
|
4.4. Model Validation Models must be in a valid state in order to be used for data access. Validation of a single model means that it must be in a self-consistent and complete state, meaning that there are no missing pieces and no references to non-existent entities. Validation of multiple models checks that all inter-model dependencies are present and resolvable. Models must always be validated when they are deployed in a VDB for data access purposes. Teiid Designer will automatically validate all models whenever they are saved. Note The Project > Build Automatically menu option must be selected. When editing models, the editor tabs will display a * to indicate that the model has unsaved changes.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/model_validation
|
2.2. Role Mapping
|
2.2. Role Mapping To convert the Principals in a Subject into a set of roles used for authorization, a PrincipalRoleMapper must be specified in the global configuration. Red Hat JBoss Data Grid ships with three mappers and also allows you to provide a custom mapper.
Table 2.3. Mappers (Mapper Name / Java class / XML element / Description):
IdentityRoleMapper (org.infinispan.security.impl.IdentityRoleMapper, <identity-role-mapper />): Uses the Principal name as the role name.
CommonNameRoleMapper (org.infinispan.security.impl.CommonRoleMapper, <common-name-role-mapper />): If the Principal name is a Distinguished Name (DN), this mapper extracts the Common Name (CN) and uses it as the role name. For example, the DN cn=managers,ou=people,dc=example,dc=com is mapped to the role managers.
ClusterRoleMapper (org.infinispan.security.impl.ClusterRoleMapper, <cluster-role-mapper />): Uses the ClusterRegistry to store principal-to-role mappings. This allows the use of the CLI's GRANT and DENY commands to add or remove roles for a Principal.
Custom Role Mapper (<custom-role-mapper class="a.b.c" />): Supply the fully qualified class name of an implementation of org.infinispan.security.impl.PrincipalRoleMapper.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/role_mapping2
|
7.9. at
|
7.9. at 7.9.1. RHBA-2015:0240 - at bug fix update Updated at packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The at packages provide a utility for time-oriented job control. The at utility reads commands from standard input or from a specified file and allows you to specify that the commands will be run at a particular time. Bug Fixes BZ# 994201 Due to incorrect race condition handling in the "atd" daemon, "atd" terminated unexpectedly. With this update, "atd" handles the race condition correctly, so that now "atd" no longer terminates in the described scenario. BZ# 1166882 Previously, the "at" command was not properly checking the return value of the fclose() function call. As a consequence, if the /var/spool/at file system filled up, "at" could leave empty stale files in the spool directory. With this update, "at" properly checks the return value from fclose(), and "at" no longer leaves empty files in spool in the described scenario. Users of at are advised to upgrade to these updated packages, which fix these bugs.
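The update text above describes the at utility itself; for readers unfamiliar with it, the following is a minimal, illustrative sketch of the behavior described. The specific commands, file name, and times are assumptions for demonstration only.

# Schedule a one-off command read from standard input:
echo "tar -czf /tmp/etc-backup.tar.gz /etc" | at 02:00
# Schedule the commands in a file to run one hour from now:
at -f /root/backup-job.sh now + 1 hour
# List the pending jobs for the current user:
atq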
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-at
|
function::sigset_mask_str
|
function::sigset_mask_str Name function::sigset_mask_str - Returns the string representation of a sigset. Synopsis: see the signature below. Arguments: mask - the sigset to convert to a string.
|
[
"sigset_mask_str:string(mask:long)"
] |
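A minimal sketch of calling this function from a one-off script, based only on the synopsis above. The mask value 0x101 (bits set for signals 1 and 9 under the usual one-bit-per-signal layout) and the printed format are illustrative assumptions; running stap typically requires root privileges or membership in the stapdev/stapusr groups.

# Print the string form of a hand-built signal mask, then exit:
stap -e 'probe begin { println(sigset_mask_str(0x101)); exit() }'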
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sigset-mask-str
|
Appendix A. RHEL 7 repositories
|
Appendix A. RHEL 7 repositories Before the upgrade, ensure you have the appropriate repositories enabled as described in step 4 of the procedure in Preparing a RHEL 7 system for the upgrade. If you plan to use Red Hat Subscription Manager during the upgrade, you must enable the following repositories before the upgrade by using the subscription-manager repos --enable repository_id command:
64-bit Intel: Base rhel-7-server-rpms, Extras rhel-7-server-extras-rpms
IBM POWER8 (little endian): Base rhel-7-for-power-le-rpms, Extras rhel-7-for-power-le-extras-rpms
IBM Z: Base rhel-7-for-system-z-rpms, Extras rhel-7-for-system-z-extras-rpms
You can additionally enable the following repositories before the upgrade by using the subscription-manager repos --enable repository_id command:
64-bit Intel: Optional rhel-7-server-optional-rpms, Supplementary rhel-7-server-supplementary-rpms
IBM POWER8 (little endian): Optional rhel-7-for-power-le-optional-rpms, Supplementary rhel-7-for-power-le-supplementary-rpms
IBM Z: Optional rhel-7-for-system-z-optional-rpms, Supplementary rhel-7-for-system-z-supplementary-rpms
Note If you have enabled a RHEL 7 Optional or a RHEL 7 Supplementary repository before an in-place upgrade, Leapp enables the RHEL 8 CodeReady Linux Builder or RHEL 8 Supplementary repositories, respectively. If you decide to use custom repositories, enable them per the instructions in Configuring custom repositories.
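A minimal sketch for a 64-bit Intel system, using the repository IDs listed above; enable the Optional and Supplementary repositories only if you need them.

# Enable the required Base and Extras repositories before the upgrade:
subscription-manager repos \
  --enable rhel-7-server-rpms \
  --enable rhel-7-server-extras-rpms
# Optionally enable the Optional and Supplementary repositories:
subscription-manager repos \
  --enable rhel-7-server-optional-rpms \
  --enable rhel-7-server-supplementary-rpms
# Verify which repositories are enabled:
subscription-manager repos --list-enabled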
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/appendix-rhel-7-repositories_upgrading-from-rhel-7-to-rhel-8
|
Chapter 3. Troubleshooting networking issues
|
Chapter 3. Troubleshooting networking issues This chapter lists basic troubleshooting procedures connected with networking and chrony for Network Time Protocol (NTP). Prerequisites A running Red Hat Ceph Storage cluster. 3.1. Basic networking troubleshooting Red Hat Ceph Storage depends heavily on a reliable network connection. Red Hat Ceph Storage nodes use the network for communicating with each other. Networking issues can cause many problems with Ceph OSDs, such as OSDs flapping or being incorrectly reported as down. Networking issues can also cause Ceph Monitor clock skew errors. In addition, packet loss, high latency, or limited bandwidth can impact the cluster performance and stability. Prerequisites Root-level access to the node. Procedure Installing the net-tools and telnet packages can help when troubleshooting network issues that can occur in a Ceph storage cluster: Example Log into the cephadm shell and verify that the public_network parameters in the Ceph configuration file include the correct values: Example Exit the shell and verify that the network interfaces are up: Example Verify that the Ceph nodes are able to reach each other using their short host names. Verify this on each node in the storage cluster: Syntax Example If you use a firewall, ensure that Ceph nodes are able to reach each other on their appropriate ports. The firewall-cmd and telnet tools can validate the port status and whether the port is open, respectively: Syntax Example Verify that there are no errors on the interface counters. Verify that the network connectivity between nodes has the expected latency and that there is no packet loss. Using the ethtool command: Syntax Example Using the ifconfig command: Example Using the netstat command: Example For performance issues, in addition to the latency checks, use the iperf3 tool to verify the network bandwidth between all nodes of the storage cluster. The iperf3 tool does a simple point-to-point network bandwidth test between a server and a client. Install the iperf3 package on the Red Hat Ceph Storage nodes whose bandwidth you want to check: Example On a Red Hat Ceph Storage node, start the iperf3 server: Example Note The default port is 5201, but it can be set using the -P command argument. On a different Red Hat Ceph Storage node, start the iperf3 client: Example This output shows a network bandwidth of about 940 Mbits/second (roughly 1 Gbit/s) between the Red Hat Ceph Storage nodes, along with no retransmissions (Retr) during the test. Red Hat recommends that you validate the network bandwidth between all the nodes in the storage cluster. Ensure that all nodes have the same network interconnect speed. Slower attached nodes might slow down the faster connected ones. Also, ensure that the inter-switch links can handle the aggregated bandwidth of the attached nodes: Syntax Example Additional Resources See the Basic Network troubleshooting solution on the Customer Portal for details. See the What is the "ethtool" command and how can I use it to obtain information about my network devices and interfaces solution on the Customer Portal for details. See the RHEL network interface dropping packets solutions on the Customer Portal for details. For details, see the What are the performance benchmarking tools available for Red Hat Ceph Storage? solution on the Customer Portal. For more information, see Knowledgebase articles and solutions related to troubleshooting networking issues on the Customer Portal. 3.2. Basic chrony NTP troubleshooting This section includes basic chrony NTP troubleshooting steps.
Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Verify that the chronyd daemon is running on the Ceph Monitor hosts: Example If chronyd is not running, enable and start it: Example Ensure that chronyd is synchronizing the clocks correctly: Example Additional Resources See the How to troubleshoot chrony issues solution on the Red Hat Customer Portal for advanced chrony NTP troubleshooting steps. See the Clock skew section in the Red Hat Ceph Storage Troubleshooting Guide for further details. See the Checking if chrony is synchronized section for further details.
|
[
"dnf install net-tools dnf install telnet",
"cat /etc/ceph/ceph.conf minimal ceph.conf for 57bddb48-ee04-11eb-9962-001a4a000672 [global] fsid = 57bddb48-ee04-11eb-9962-001a4a000672 mon_host = [v2:10.74.249.26:3300/0,v1:10.74.249.26:6789/0] [v2:10.74.249.163:3300/0,v1:10.74.249.163:6789/0] [v2:10.74.254.129:3300/0,v1:10.74.254.129:6789/0] [mon.host01] public network = 10.74.248.0/21",
"ip link list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:1a:4a:00:06:72 brd ff:ff:ff:ff:ff:ff",
"ping SHORT_HOST_NAME",
"ping host02",
"firewall-cmd --info-zone= ZONE telnet IP_ADDRESS PORT",
"firewall-cmd --info-zone=public public (active) target: default icmp-block-inversion: no interfaces: ens3 sources: services: ceph ceph-mon cockpit dhcpv6-client ssh ports: 9283/tcp 8443/tcp 9093/tcp 9094/tcp 3000/tcp 9100/tcp 9095/tcp protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: telnet 192.168.0.22 9100",
"ethtool -S INTERFACE",
"ethtool -S ens3 | grep errors NIC statistics: rx_fcs_errors: 0 rx_align_errors: 0 rx_frame_too_long_errors: 0 rx_in_length_errors: 0 rx_out_length_errors: 0 tx_mac_errors: 0 tx_carrier_sense_errors: 0 tx_errors: 0 rx_errors: 0",
"ifconfig ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.74.249.26 netmask 255.255.248.0 broadcast 10.74.255.255 inet6 fe80::21a:4aff:fe00:672 prefixlen 64 scopeid 0x20<link> inet6 2620:52:0:4af8:21a:4aff:fe00:672 prefixlen 64 scopeid 0x0<global> ether 00:1a:4a:00:06:72 txqueuelen 1000 (Ethernet) RX packets 150549316 bytes 56759897541 (52.8 GiB) RX errors 0 dropped 176924 overruns 0 frame 0 TX packets 55584046 bytes 62111365424 (57.8 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 9373290 bytes 16044697815 (14.9 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9373290 bytes 16044697815 (14.9 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0",
"netstat -ai Kernel Interface table Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg ens3 1500 311847720 0 364903 0 114341918 0 0 0 BMRU lo 65536 19577001 0 0 0 19577001 0 0 0 LRU",
"dnf install iperf3",
"iperf3 -s ----------------------------------------------------------- Server listening on 5201 -----------------------------------------------------------",
"iperf3 -c mon Connecting to host mon, port 5201 [ 4] local xx.x.xxx.xx port 52270 connected to xx.x.xxx.xx port 5201 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 114 MBytes 954 Mbits/sec 0 409 KBytes [ 4] 1.00-2.00 sec 113 MBytes 945 Mbits/sec 0 409 KBytes [ 4] 2.00-3.00 sec 112 MBytes 943 Mbits/sec 0 454 KBytes [ 4] 3.00-4.00 sec 112 MBytes 941 Mbits/sec 0 471 KBytes [ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec 0 471 KBytes [ 4] 5.00-6.00 sec 113 MBytes 945 Mbits/sec 0 471 KBytes [ 4] 6.00-7.00 sec 112 MBytes 937 Mbits/sec 0 488 KBytes [ 4] 7.00-8.00 sec 113 MBytes 947 Mbits/sec 0 520 KBytes [ 4] 8.00-9.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes [ 4] 9.00-10.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 0 sender [ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver iperf Done.",
"ethtool INTERFACE",
"ethtool ens3 Settings for ens3: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Link partner advertised pause frame use: Symmetric Link partner advertised auto-negotiation: Yes Link partner advertised FEC modes: Not reported Speed: 1000Mb/s 1 Duplex: Full 2 Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: off Supports Wake-on: g Wake-on: d Current message level: 0x000000ff (255) drv probe link timer ifdown ifup rx_err tx_err Link detected: yes 3",
"systemctl status chronyd",
"systemctl enable chronyd systemctl start chronyd",
"chronyc sources chronyc sourcestats chronyc tracking"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/troubleshooting_guide/troubleshooting-networking-issues
|