Chapter 9. Using AMQ Streams with MirrorMaker 2
Use MirrorMaker 2 to replicate data between two or more active Kafka clusters, within or across data centers. To configure MirrorMaker 2, edit the config/connect-mirror-maker.properties configuration file. If required, you can enable distributed tracing for MirrorMaker 2.

Handling high volumes of messages
You can tune the configuration to handle high volumes of messages. For more information, see Chapter 11, Handling high volumes of messages.

Note: MirrorMaker 2 has features not supported by the previous version of MirrorMaker. However, you can configure MirrorMaker 2 to be used in legacy mode.

9.1. Configuring active/active or active/passive modes

You can use MirrorMaker 2 in active/passive or active/active cluster configurations.

active/active cluster configuration
An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster.

active/passive cluster configuration
An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure.

The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination.

9.1.1. Bidirectional replication (active/active)

The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic.

Figure 9.1. Topic renaming

By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster.

9.1.2. Unidirectional replication (active/passive)

The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names.

9.2. Configuring MirrorMaker 2 connectors

Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors:

MirrorSourceConnector
The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run.

MirrorCheckpointConnector
The checkpoint connector periodically tracks offsets.
If enabled, it also synchronizes consumer group offsets between the source and target cluster.

MirrorHeartbeatConnector
The heartbeat connector periodically checks connectivity between the source and target cluster.

The following table describes the connector properties and the connectors each property applies to.

Table 9.1. MirrorMaker 2 connector configuration properties

admin.timeout.ms
Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). Applies to: source, checkpoint, and heartbeat connectors.

replication.policy.class
Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy. Applies to: source, checkpoint, and heartbeat connectors.

replication.policy.separator
The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property, as topics retain their original names. Applies to: source, checkpoint, and heartbeat connectors.

consumer.poll.timeout.ms
Timeout when polling the source cluster. Default is 1000 (1 second). Applies to: source and checkpoint connectors.

offset-syncs.topic.location
The location of the offset-syncs topic, which can be the source (default) or target cluster. Applies to: source and checkpoint connectors.

topic.filter.class
Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter. Applies to: source and checkpoint connectors.

config.property.filter.class
Filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter. Applies to: source connector.

config.properties.exclude
Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. Applies to: source connector.

offset.lag.max
Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100. Applies to: source connector.

offset-syncs.topic.replication.factor
Replication factor for the internal offset-syncs topic. Default is 3. Applies to: source connector.

refresh.topics.enabled
Enables checks for new topics and partitions. Default is true. Applies to: source connector.

refresh.topics.interval.seconds
Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. Applies to: source connector.

replication.factor
The replication factor for new topics. Default is 2. Applies to: source connector.

sync.topic.acls.enabled
Enables synchronization of ACLs from the source cluster. Default is true. For more information, see Section 9.5, "ACL rules synchronization". Applies to: source connector.

sync.topic.acls.interval.seconds
Frequency of ACL synchronization. Default is 600 (10 minutes). Applies to: source connector.

sync.topic.configs.enabled
Enables synchronization of topic configuration from the source cluster. Default is true. Applies to: source connector.

sync.topic.configs.interval.seconds
Frequency of topic configuration synchronization. Default is 600 (10 minutes). Applies to: source connector.

checkpoints.topic.replication.factor
Replication factor for the internal checkpoints topic. Default is 3. Applies to: checkpoint connector.

emit.checkpoints.enabled
Enables synchronization of consumer offsets to the target cluster. Default is true. Applies to: checkpoint connector.

emit.checkpoints.interval.seconds
Frequency of consumer offset synchronization. Default is 60 (1 minute). Applies to: checkpoint connector.

group.filter.class
Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter. Applies to: checkpoint connector.

refresh.groups.enabled
Enables checks for new consumer groups. Default is true. Applies to: checkpoint connector.

refresh.groups.interval.seconds
Frequency of consumer group refresh. Default is 600 (10 minutes). Applies to: checkpoint connector.

sync.group.offsets.enabled
Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false. Applies to: checkpoint connector.

sync.group.offsets.interval.seconds
Frequency of consumer group offset synchronization. Default is 60 (1 minute). Applies to: checkpoint connector.

emit.heartbeats.enabled
Enables connectivity checks on the target cluster. Default is true. Applies to: heartbeat connector.

emit.heartbeats.interval.seconds
Frequency of connectivity checks. Default is 1 (1 second). Applies to: heartbeat connector.

heartbeats.topic.replication.factor
Replication factor for the internal heartbeats topic. Default is 3. Applies to: heartbeat connector.

9.2.1. Changing the location of the consumer group offsets topic

MirrorMaker 2 tracks offsets for consumer groups using internal topics.

offset-syncs topic
The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata.

checkpoints topic
The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group.

As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover.

The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster.
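For example, to host the offset-syncs topic on the target cluster, you might add the following to config/connect-mirror-maker.properties. This is a minimal sketch; the cluster aliases are illustrative.

Example: changing the offset-syncs topic location

clusters=cluster-1,cluster-2
# Store the offset-syncs topic on the target cluster rather than the
# source cluster (the default). Useful when you have only read access
# to the source cluster; requires read/write access to the target.
offset-syncs.topic.location=target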
9.2.2. Synchronizing consumer group offsets

The __consumer_offsets topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster.

Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position.

To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration and setting the property to true. Synchronization is disabled by default. When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets are applied to the correct topics.

Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are active in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned.

If enabled, the synchronization of offsets from the source cluster is performed periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds at which consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property, which is performed every 10 minutes by default.

Because the synchronization is time-based, any switchover by consumers to a passive cluster is likely to result in some duplication of messages.

Note: If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic.
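Putting these properties together, a sketch of the checkpoint-related settings in config/connect-mirror-maker.properties might look as follows; the values shown are the documented defaults or illustrative choices.

Example: enabling consumer group offset synchronization

# Enable consumer group offset synchronization (disabled by default).
sync.group.offsets.enabled=true
# How often (in seconds) group offsets are synchronized and checkpoints emitted.
sync.group.offsets.interval.seconds=60
emit.checkpoints.interval.seconds=60
# If the source connector uses IdentityReplicationPolicy, configure the same
# policy here so mirrored offsets are applied to the correct topic names.
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy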
9.2.3. Deciding when to use the heartbeat connector

The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat topic is located on the target cluster, which allows it to do the following:

Identify all source clusters it is mirroring data from
Verify the liveness and latency of the mirroring process

This helps to make sure that the process is not stuck or has stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration.

9.2.4. Aligning the configuration of MirrorMaker 2 connectors

To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors:

replication.policy.class
replication.policy.separator
offset-syncs.topic.location
topic.filter.class

For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings, as in the sketch after this list.
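In the dedicated-mode properties file, these settings are defined once and picked up by the source, checkpoint, and heartbeat connectors alike, which keeps them aligned automatically. A sketch restating the documented defaults explicitly:

# Set once; applies to the source, checkpoint, and heartbeat connectors alike.
replication.policy.class=org.apache.kafka.connect.mirror.DefaultReplicationPolicy
replication.policy.separator=.
offset-syncs.topic.location=source
topic.filter.class=org.apache.kafka.connect.mirror.DefaultTopicFilter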
9.3. Connector producer and consumer configuration

MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings.

Important: Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change.

Producer and consumer configuration applies to all connectors. You specify the configuration in the config/connect-mirror-maker.properties file. Use the properties file to override any default configuration for the producers and consumers in the following format:

<source_cluster_name>.consumer.<property>
<source_cluster_name>.producer.<property>
<target_cluster_name>.consumer.<property>
<target_cluster_name>.producer.<property>

The following example shows how you configure the producers and consumers. Though the properties are set for all connectors, some configuration properties are only relevant to certain connectors.

Example configuration for connector producers and consumers

clusters=cluster-1,cluster-2
# ...
cluster-1.consumer.fetch.max.bytes=52428800
cluster-2.producer.batch.size=327680
cluster-2.producer.linger.ms=100
cluster-2.producer.request.timeout.ms=30000

9.4. Specifying a maximum number of tasks

Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups.

Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks.

You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasks.max property. Without specifying a maximum number of tasks, the default setting is a single task.

The heartbeat connector always uses a single task.

The number of tasks that are started for the source and checkpoint connectors is the lower of the maximum number of possible tasks and the value for tasks.max. For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster.

When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups.

tasks.max configuration for MirrorMaker connectors

clusters=cluster-1,cluster-2
# ...
tasks.max = 10

By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when lowering the value. More frequent checks can have a negative impact on performance.

9.5. ACL rules synchronization

If AclAuthorizer is being used, ACL rules that manage access to brokers also apply to remote topics. Users that can read a source topic can read its remote equivalent.

Note: OAuth 2.0 authorization does not support access to remote topics in this way.

9.6. Running MirrorMaker 2 in dedicated mode

Use MirrorMaker 2 to synchronize data between Kafka clusters through configuration. This procedure shows how to configure and run a dedicated single-node MirrorMaker 2 cluster. Dedicated clusters use Kafka Connect worker nodes to mirror data between Kafka clusters. At present, MirrorMaker 2 in dedicated mode only works with a single worker node.

Note: It is also possible to run MirrorMaker 2 in distributed mode. In distributed mode, MirrorMaker 2 runs as connectors in a Kafka Connect cluster. Kafka provides MirrorMaker source connectors for data replication. If you wish to use the connectors instead of running a dedicated MirrorMaker cluster, the connectors must be configured in the Kafka Connect cluster. For more information, refer to the Apache Kafka documentation.

The previous version of MirrorMaker continues to be supported, by running MirrorMaker 2 in legacy mode.

The configuration must specify:

Each Kafka cluster
Connection information for each cluster, including TLS authentication
The replication flow and direction
  Cluster to cluster
  Topic to topic
Replication rules
Committed offset tracking intervals

This procedure describes how to implement MirrorMaker 2 by creating the configuration in a properties file, then passing the properties when using the MirrorMaker script file to set up the connections.

You can specify the topics and consumer groups you wish to replicate from a source cluster. You specify the names of the source and target clusters, then specify the topics and consumer groups to replicate.

In the following example, topics and consumer groups are specified for replication from cluster 1 to cluster 2.

Example configuration to replicate specific topics and consumer groups

clusters=cluster-1,cluster-2
cluster-1->cluster-2.topics = topic-1, topic-2
cluster-1->cluster-2.groups = group-1, group-2

You can provide a list of names or use a regular expression, as in the sketch that follows. By default, all topics and consumer groups are replicated if you do not set these properties. You can also replicate all topics and consumer groups by using .* as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster.
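For instance, a hypothetical flow that selects topics and groups by regular expression instead of listing each name; the patterns are illustrative:

clusters=cluster-1,cluster-2
# Replicate every topic whose name starts with "orders-" and every
# consumer group whose name starts with "app-".
cluster-1->cluster-2.topics = orders-.*
cluster-1->cluster-2.groups = app-.*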
Before you begin

A sample configuration properties file is provided in ./config/connect-mirror-maker.properties.

Prerequisites

You need AMQ Streams installed on the hosts of each Kafka cluster node you are replicating.

Procedure

Open the sample properties file in a text editor, or create a new one, and edit the file to include connection information and the replication flows for each Kafka cluster. The following example shows a configuration to connect two clusters, cluster-1 and cluster-2, bidirectionally. Cluster names are configurable through the clusters property.

Example MirrorMaker 2 configuration

clusters=cluster-1,cluster-2 1
cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2
cluster-1.security.protocol=SSL 3
cluster-1.ssl.truststore.password=<truststore_name>
cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks
cluster-1.ssl.keystore.password=<keystore_name>
cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12
cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4
cluster-2.security.protocol=SSL 5
cluster-2.ssl.truststore.password=<truststore_name>
cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks
cluster-2.ssl.keystore.password=<keystore_name>
cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12
cluster-1->cluster-2.enabled=true 6
cluster-2->cluster-1.enabled=true 7
cluster-1->cluster-2.topics=.* 8
cluster-2->cluster-1.topics=topic-1, topic-2 9
cluster-1->cluster-2.groups=.* 10
cluster-2->cluster-1.groups=group-1, group-2 11
replication.policy.separator=- 12
sync.topic.acls.enabled=false 13
refresh.topics.interval.seconds=60 14
refresh.groups.interval.seconds=60 15

1 Each Kafka cluster is identified with its alias.
2 Connection information for cluster-1, using the bootstrap address and port 443. Both clusters use port 443 to connect to Kafka using OpenShift Routes.
3 The ssl. properties define the TLS configuration for cluster-1.
4 Connection information for cluster-2.
5 The ssl. properties define the TLS configuration for cluster-2.
6 Replication flow enabled from cluster-1 to cluster-2.
7 Replication flow enabled from cluster-2 to cluster-1.
8 Replication of all topics from cluster-1 to cluster-2. The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics.
9 Replication of specific topics from cluster-2 to cluster-1.
10 Replication of all consumer groups from cluster-1 to cluster-2. The checkpoint connector replicates the specified consumer groups.
11 Replication of specific consumer groups from cluster-2 to cluster-1.
12 Defines the separator used for the renaming of remote topics.
13 Controls synchronization of ACLs to remote topics; disabled in this example.
14 The period between checks for new topics to synchronize.
15 The period between checks for new consumer groups to synchronize.

OPTION: If required, add a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is used for active/passive backups and data migration.

replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy

OPTION: If you want to synchronize consumer group offsets, add configuration to enable and manage the synchronization:

refresh.groups.interval.seconds=60
sync.group.offsets.enabled=true 1
sync.group.offsets.interval.seconds=60 2
emit.checkpoints.interval.seconds=60 3

1 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default.
2 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization.
3 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks.

Start ZooKeeper and Kafka in the target clusters:

su - kafka
/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

Start MirrorMaker with the cluster connection configuration and replication policies you defined in your properties file:

/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties

MirrorMaker sets up connections between the clusters.

For each target cluster, verify that the topics are being replicated:

/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list
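With the DefaultReplicationPolicy and the - separator configured in the example above, remote topics on the target cluster appear with the source cluster alias prepended. A hypothetical listing on cluster-2 might therefore include entries such as the following; the topic names are purely illustrative:

cluster-1-topic-1
cluster-1-topic-2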
9.7. Using MirrorMaker 2 in legacy mode

This procedure describes how to configure MirrorMaker 2 to use it in legacy mode. Legacy mode supports the previous version of MirrorMaker. The MirrorMaker script /opt/kafka/bin/kafka-mirror-maker.sh can run MirrorMaker 2 in legacy mode.

Important: Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. Kafka MirrorMaker 1 will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use MirrorMaker 2 with the IdentityReplicationPolicy.

Prerequisites

You need the properties files you currently use with the legacy version of MirrorMaker.

/opt/kafka/config/consumer.properties
/opt/kafka/config/producer.properties

Procedure

Edit the MirrorMaker consumer.properties and producer.properties files to turn off MirrorMaker 2 features. For example:

replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1
refresh.topics.enabled=false 2
refresh.groups.enabled=false
emit.checkpoints.enabled=false
emit.heartbeats.enabled=false
sync.topic.configs.enabled=false
sync.topic.acls.enabled=false

1 Emulates the previous version of MirrorMaker.
2 MirrorMaker 2 features disabled, including the internal checkpoint and heartbeat topics.

Save the changes and restart MirrorMaker with the properties files you used with the previous version of MirrorMaker:

su - kafka
/opt/kafka/bin/kafka-mirror-maker.sh \
--consumer.config /opt/kafka/config/consumer.properties \
--producer.config /opt/kafka/config/producer.properties \
--num.streams=2

The consumer properties provide the configuration for the source cluster and the producer properties provide the target cluster configuration. MirrorMaker sets up connections between the clusters.

Start ZooKeeper and Kafka in the target cluster:

su - kafka
/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

For the target cluster, verify that the topics are being replicated:

/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list
[ "clusters=cluster-1,cluster-2 cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000", "clusters=cluster-1,cluster-2 tasks.max = 10", "clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2", "clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15", "replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy", "refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3", "su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties", "/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list", "replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false", "su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2", "su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/assembly-mirrormaker-str
Chapter 8. DNS [config.openshift.io/v1]
Description DNS holds cluster-wide information about DNS. The canonical name is cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 8.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description baseDomain string baseDomain is the base domain of the cluster. All managed DNS records will be sub-domains of this base. For example, given the base domain openshift.example.com, an API server DNS record may be created for cluster-api.openshift.example.com. Once set, this field cannot be changed. platform object platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. privateZone object privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. publicZone object publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. 8.1.2. .spec.platform Description platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required type Property Type Description aws object aws contains DNS configuration specific to the Amazon Web Services cloud provider. type string type is the underlying infrastructure provider for the cluster. Allowed values: "", "AWS". Individual components may not support all platforms, and must handle unrecognized platforms with best-effort defaults. 8.1.3. .spec.platform.aws Description aws contains DNS configuration specific to the Amazon Web Services cloud provider. Type object Property Type Description privateZoneIAMRole string privateZoneIAMRole contains the ARN of an IAM role that should be assumed when performing operations on the cluster's private hosted zone specified in the cluster DNS config. When left empty, no role should be assumed.
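For illustration only, a DNS cluster configuration with the fields described above might look like the following. All values are placeholders, and baseDomain, privateZone, and publicZone cannot be changed once set:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: openshift.example.com
  platform:
    type: AWS
    aws:
      # Placeholder ARN; the role assumed for operations on the private hosted zone.
      privateZoneIAMRole: arn:aws:iam::123456789012:role/example-private-zone-role
  privateZone:
    id: Z0123456789EXAMPLE    # placeholder hosted zone ID
  publicZone:
    id: Z9876543210EXAMPLE    # placeholder hosted zone ID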
8.1.4. .spec.privateZone Description privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.5. .spec.publicZone Description publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object 8.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/dnses DELETE : delete collection of DNS GET : list objects of kind DNS POST : create a DNS /apis/config.openshift.io/v1/dnses/{name} DELETE : delete a DNS GET : read the specified DNS PATCH : partially update the specified DNS PUT : replace the specified DNS /apis/config.openshift.io/v1/dnses/{name}/status GET : read status of the specified DNS PATCH : partially update status of the specified DNS PUT : replace status of the specified DNS 8.2.1. /apis/config.openshift.io/v1/dnses Table 8.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DNS Table 8.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNS Table 8.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects.
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.5. HTTP responses HTTP code Response body 200 - OK DNSList schema 401 - Unauthorized Empty HTTP method POST Description create a DNS Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body DNS schema Table 8.8. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 202 - Accepted DNS schema 401 - Unauthorized Empty 8.2.2. /apis/config.openshift.io/v1/dnses/{name} Table 8.9. Global path parameters Parameter Type Description name string name of the DNS Table 8.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DNS Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.12. Body parameters Parameter Type Description body DeleteOptions schema Table 8.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNS Table 8.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.15. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNS Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled.
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body Patch schema Table 8.18. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNS Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body DNS schema Table 8.21. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty 8.2.3. /apis/config.openshift.io/v1/dnses/{name}/status Table 8.22. Global path parameters Parameter Type Description name string name of the DNS Table 8.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DNS Table 8.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.25. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNS Table 8.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.27. Body parameters Parameter Type Description body Patch schema Table 8.28. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNS Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.30.
Body parameters Parameter Type Description body DNS schema Table 8.31. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty
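As a quick sketch of how these endpoints are typically exercised, both commands below read the cluster-scoped DNS config named cluster; the output follows the schema described above:

# Read the DNS config through the CLI:
oc get dns.config.openshift.io cluster -o yaml

# Equivalent raw call against the endpoint described above:
oc get --raw /apis/config.openshift.io/v1/dnses/cluster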
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/config_apis/dns-config-openshift-io-v1
Machine APIs
Machine APIs OpenShift Container Platform 4.12 Reference guide for machine APIs Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_apis/index
Chapter 5. Configuring the Nagios Plugins for Ceph
Chapter 5. Configuring the Nagios Plugins for Ceph Configure the Nagios plug-ins for a Red Hat Ceph Storage cluster. Prerequisites User-level access to the Ceph Monitor node. A running Red Hat Ceph Storage cluster. Access to the Nagios Core Server. Procedure Log in to the monitor server and create a Ceph key and keyring for Nagios. Each plug-in requires authentication. Repeat this procedure for each node that contains a plug-in. Example

ssh mon
cd /etc/ceph
ceph auth get-or-create client.nagios mon 'allow r' > client.nagios.keyring

Add a command for the check_ceph_health plug-in to the NRPE configuration:

vi /usr/local/nagios/etc/nrpe.cfg

command[check_ceph_health]=/usr/lib/nagios/plugins/check_ceph_health --id nagios --keyring /etc/ceph/client.nagios.keyring

Enable and restart the nrpe service:

systemctl enable nrpe
systemctl restart nrpe

Repeat this procedure for each Ceph plug-in applicable to the node. Return to the Nagios Core server and define a check_nrpe command for the NRPE plug-in:

cd /usr/local/nagios/etc/objects
vi commands.cfg

define command{
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

On the Nagios Core server, edit the configuration file for the node and add a service for the Ceph plug-in. Example

vi /usr/local/nagios/etc/objects/mon.cfg

define service {
    use generic-service
    host_name mon
    service_description Ceph Health Check
    check_command check_nrpe!check_ceph_health
}

Note The check_command setting uses check_nrpe! before the Ceph plug-in name. This tells NRPE to execute the check_ceph_health command on the remote node. Repeat this procedure for each plug-in applicable to the node. Restart the Nagios Core server:

systemctl restart nagios

Before proceeding with additional configuration, ensure that the plug-ins are working by running the check locally on the node. Example

/usr/lib/nagios/plugins/check_ceph_health --id nagios --keyring /etc/ceph/client.nagios.keyring

Note The check_ceph_health plug-in performs the equivalent of the ceph health command. Additional Resources See the Ceph Nagios plugins web page for usage.
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_nagios_guide/configuring-the-nagios-plugins-for-ceph_nagios
Chapter 16. ReplicaSet [apps/v1]
Chapter 16. ReplicaSet [apps/v1] Description ReplicaSet ensures that a specified number of pod replicas are running at any given time. Type object 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ReplicaSetSpec is the specification of a ReplicaSet. status object ReplicaSetStatus represents the current status of a ReplicaSet. 16.1.1. .spec Description ReplicaSetSpec is the specification of a ReplicaSet. Type object Required selector Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its containers crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller selector LabelSelector Selector is a label query over pods that should match the replica count. Label keys and values that must match in order to be controlled by this replica set. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template PodTemplateSpec Template is the object that describes the pod that will be created if insufficient replicas are detected. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template 16.1.2. .status Description ReplicaSetStatus represents the current status of a ReplicaSet. Type object Required replicas Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this replica set. conditions array Represents the latest available observations of a replica set's current state. conditions[] object ReplicaSetCondition describes the state of a replica set at a certain point. fullyLabeledReplicas integer The number of pods that have labels matching the labels of the pod template of the replicaset. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed ReplicaSet. readyReplicas integer readyReplicas is the number of pods targeted by this ReplicaSet with a Ready Condition. replicas integer Replicas is the most recently observed number of replicas. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller 16.1.3. .status.conditions Description Represents the latest available observations of a replica set's current state.
Type array 16.1.4. .status.conditions[] Description ReplicaSetCondition describes the state of a replica set at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of replica set condition. 16.2. API endpoints The following API endpoints are available: /apis/apps/v1/replicasets GET : list or watch objects of kind ReplicaSet /apis/apps/v1/watch/replicasets GET : watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/replicasets DELETE : delete collection of ReplicaSet GET : list or watch objects of kind ReplicaSet POST : create a ReplicaSet /apis/apps/v1/watch/namespaces/{namespace}/replicasets GET : watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/replicasets/{name} DELETE : delete a ReplicaSet GET : read the specified ReplicaSet PATCH : partially update the specified ReplicaSet PUT : replace the specified ReplicaSet /apis/apps/v1/watch/namespaces/{namespace}/replicasets/{name} GET : watch changes to an object of kind ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status GET : read status of the specified ReplicaSet PATCH : partially update status of the specified ReplicaSet PUT : replace status of the specified ReplicaSet 16.2.1. /apis/apps/v1/replicasets Table 16.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true.
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed.
- If resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ReplicaSet Table 16.2. HTTP responses HTTP code Response body 200 - OK ReplicaSetList schema 401 - Unauthorized Empty 16.2.2. /apis/apps/v1/watch/replicasets Table 16.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. Table 16.4. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.3. /apis/apps/v1/namespaces/{namespace}/replicasets Table 16.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 16.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ReplicaSet Table 16.7.
Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 16.8. Body parameters Parameter Type Description body DeleteOptions schema Table 16.9. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ReplicaSet Table 16.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session.
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true.
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.11. HTTP responses HTTP code Response body 200 - OK ReplicaSetList schema 401 - Unauthorized Empty HTTP method POST Description create a ReplicaSet Table 16.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.13. Body parameters Parameter Type Description body ReplicaSet schema Table 16.14. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 202 - Accepted ReplicaSet schema 401 - Unauthorized Empty 16.2.4.
/apis/apps/v1/watch/namespaces/{namespace}/replicasets Table 16.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 16.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed.
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. Table 16.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.5. /apis/apps/v1/namespaces/{namespace}/replicasets/{name} Table 16.18. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 16.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ReplicaSet Table 16.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified.
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 16.21. Body parameters Parameter Type Description body DeleteOptions schema Table 16.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ReplicaSet Table 16.23. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ReplicaSet Table 16.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 16.25. Body parameters Parameter Type Description body Patch schema Table 16.26. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ReplicaSet Table 16.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.28. Body parameters Parameter Type Description body ReplicaSet schema Table 16.29. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty 16.2.6. /apis/apps/v1/watch/namespaces/{namespace}/replicasets/{name} Table 16.30. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 16.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or unset, an Invalid error is returned.
Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 16.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.7. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status Table 16.33. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 16.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ReplicaSet Table 16.35. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ReplicaSet Table 16.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 16.37. Body parameters Parameter Type Description body Patch schema Table 16.38. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ReplicaSet Table 16.39.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.40. Body parameters Parameter Type Description body ReplicaSet schema Table 16.41. HTTP responses HTTP code Response body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty
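As a worked example of the endpoints above, the following sketch creates a minimal ReplicaSet, reads its status subresource, and validates a change with a server-side dry run. The name frontend, the default namespace, and the image reference are illustrative placeholders:

cat << EOF | oc create -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: images.my-company.example/web:v1
EOF

# GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status
oc get --raw /apis/apps/v1/namespaces/default/replicasets/frontend/status

# PATCH with dryRun=All: the server validates the change but does not persist it.
oc patch replicaset frontend -n default --type=merge -p '{"spec":{"replicas":5}}' --dry-run=server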
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/replicaset-apps-v1
Storage
OpenShift Container Platform 4.16
Configuring and managing storage in OpenShift Container Platform
Red Hat OpenShift Documentation Team
[ "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: \"2Gi\" 1 limits: ephemeral-storage: \"4Gi\" 2 volumeMounts: - name: ephemeral mountPath: \"/tmp\" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: \"2Gi\" limits: ephemeral-storage: \"4Gi\" volumeMounts: - name: ephemeral mountPath: \"/tmp\" volumes: - name: ephemeral emptyDir: {}", "df -h /var/lib", "Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /", "oc delete pv <pv-name>", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:", "oc get pv <pv-claim>", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3", "apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi", "apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3", "securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1", "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", "cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: 
accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount <service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP 
restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. 
nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml", "oc adm new-project openshift-local-storage", "oc annotate namespace openshift-local-storage openshift.io/node-selector=''", "oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Filesystem 5 fsType: xfs 6 devicePaths: 7 - /path/to/device 8", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 
storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block 5 devicePaths: 6 - /path/to/device 7", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi 
RWO Delete Available local-sc 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"local-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file_name>", "oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase 4.13.0-202301261535 Succeeded", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}", "oc create ns <namespace>", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: 
policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low", "oc create -f <file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: \"true\" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 forceWipeDevicesAndDestroyAllData: true thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10 chunkSize: 128Ki 5 chunkSizeCalculationPolicy: Static 6", "lsblk --paths --json -o NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE", "pvs <device-name> 1", "cat /proc/1/mountinfo | grep <device-name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 1 fstype: ext4 2 default: true deviceSelector: 3 forceWipeDevicesAndDestroyAllData: false 4 thinPoolConfig: 5 nodeSelector: 6", "pvs -S vgname=<vg_name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 nodeSelector: 2 deviceSelector: 3 thinPoolConfig: 4", "oc create -f <file_name>", "lvmcluster/lvmcluster created", "oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>", "{\"deviceClassStatuses\": 1 [ { \"name\": \"vg1\", \"nodeStatus\": [ 2 { \"devices\": [ 3 \"/dev/nvme0n1\", \"/dev/nvme1n1\", \"/dev/nvme2n1\" ], \"node\": \"kube-node\", 4 \"status\": \"Ready\" 5 } ] } ] \"state\":\"Ready\"} 6", "status: deviceClassStatuses: - name: vg1 
nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume group status: Failed state: Failed", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m", "oc get volumesnapshotclass", "NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms namespace: openshift-storage spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 deviceSelector: 2 thinPoolConfig: 3 nodeSelector: 4 remediationAction: enforce severity: low", "oc create -f <file_name> -n <cluster_namespace> 1", "oc delete lvmcluster <lvmclustername> -n openshift-storage", "oc get lvmcluster -n <namespace>", "No resources found in openshift-storage namespace.", "oc delete -f <file_name> -n <cluster_namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io 
kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "oc get policy -n <namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 limits: storage: 20Gi 4 storageClassName: lvms-vg1 5", "oc create -f <file_name> -n <application_namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc edit <lvmcluster_file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "oc edit -f <file_name> -ns <namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1", "oc patch <pvc_name> -n <application_namespace> -p \\ 1 '{ \"spec\": { \"resources\": { \"requests\": { \"storage\": \"<desired_size>\" }}}} --type=merge' 2", "oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}", "oc delete pvc <pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3", "oc get volumesnapshotclass", "oc create -f <file_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block Resources: Requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO 
lvms-vg1 5s", "oc delete volumesnapshot <volume_snapshot_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc delete pvc <clone_pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{\"spec\":{\"channel\":\"<update_channel>\"}}' 1", "oc get events -n openshift-storage", "8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.16 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.16 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.16 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.16 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.16 installing: waiting for deployment lvms-operator to become ready: deployment \"lvms-operator\" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.16 install strategy completed with no errors", "oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'", "lvms-operator.v4.16", "openshift.io/cluster-monitoring=true", "oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV", "currentCSV: lvms-operator.v4.15.3", "oc delete subscription.operators.coreos.com lvms-operator -n <namespace>", "subscription.operators.coreos.com \"lvms-operator\" deleted", "oc delete clusterserviceversion <currentCSV> -n <namespace> 1", "clusterserviceversion.operators.coreos.com \"lvms-operator.v4.15.3\" deleted", "oc get csv -n <namespace>", "oc delete -f <policy> -n <namespace> 1", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: 
operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high", "oc create -f <policy> -ns <namespace>", "oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.16 --dest-dir=<directory_name>", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s", "oc describe pvc <pvc_name> 1", "Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io \"lvms-vg1\" not found", "oc get lvmcluster -n openshift-storage", "NAME AGE my-lvmcluster 65m", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m", "oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m", "oc describe pvc <pvc_name> 1", "oc project openshift-storage", "oc get logicalvolume", "oc delete logicalvolume <name> 1", "oc patch logicalvolume <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc get lvmvolumegroup", "oc delete lvmvolumegroup <name> 1", "oc patch lvmvolumegroup <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc delete lvmvolumegroupnodestatus --all", "oc delete lvmcluster --all", "oc patch lvmcluster <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "kind: CSIDriver metadata: name: csi.mydriver.company.org 
labels: security.openshift.io/csi-ephemeral-volume-profile: restricted 1", "kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar", "oc create -f my-csi-app.yaml", "oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF", "oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF", "oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF", "create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... 
Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete", "oc create -f volumesnapshotclass.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2", "oc create -f volumesnapshot-dynamic.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1", "oc create -f volumesnapshot-manual.yaml", "oc describe volumesnapshot mysnap", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi", "oc get volumesnapshotcontent", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1", "oc delete volumesnapshot <volumesnapshot_name>", "volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted", "oc delete volumesnapshotcontent <volumesnapshotcontent_name>", "oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f pvc-restore.yaml", "oc get pvc", "oc -n openshift-cluster-csi-drivers get cm/vsphere-csi-config -o yaml", "apiVersion: v1 data: cloud.conf: |+ # Labels with topology values are added dynamically via operator [Global] cluster-id = vsphere-01-cwv8p [VirtualCenter \"vcenter.openshift.com\"] insecure-flag = true datacenters = DEVQEdatacenter migration-datastore-url = ds:///vmfs/volumes/vsan:527320283a8c3163-2faa6dc5949a3a28/ kind: ConfigMap metadata: creationTimestamp: \"2024-03-06T09:46:40Z\" name: vsphere-csi-config namespace: openshift-cluster-csi-drivers resourceVersion: \"126687\"", "oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{\"spec\":{\"driverConfig\":{\"vSphere\":{\"globalMaxSnapshotsPerBlockVolume\": 10}}}}' clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched", "oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{\"spec\":{\"driverConfig\":{\"vSphere\":{\"granularMaxSnapshotsPerBlockVolumeInVVOL\": 5}}}}' clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched", "oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{\"spec\":{\"driverConfig\":{\"vSphere\":{\"granularMaxSnapshotsPerBlockVolumeInVSAN\": 7}}}}' clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched", "oc -n openshift-cluster-csi-drivers get cm/vsphere-csi-config -o yaml", "apiVersion: v1 data: cloud.conf: |+ # Labels with topology values are added dynamically via operator [Global] cluster-id = vsphere-01-cwv8p 
[VirtualCenter \"vcenter.openshift.com\"] insecure-flag = true datacenters = DEVQEdatacenter migration-datastore-url = ds:///vmfs/volumes/vsan:527320283a8c3163-2faa6dc5949a3a28/ [Snapshot] global-max-snapshots-per-block-volume = 10 1 kind: ConfigMap metadata: creationTimestamp: \"2024-03-06T09:46:40Z\" name: vsphere-csi-config namespace: openshift-cluster-csi-drivers resourceVersion: \"127118\" uid: f6968303-81d8-4048-99c1-d8211363d0fa", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1", "oc create -f pvc-clone.yaml", "oc get pvc pvc-1-clone", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1", "spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1", "patch clustercsidriver USDDRIVERNAME --type=merge -p \"{\\\"spec\\\":{\\\"storageClassState\\\":\\\"USD{STATE}\\\"}}\" 1", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com", "2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6", "Trust relationships trusted entity trusted account A configuration on 
my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::301721915996:root\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": {} } ] } my-cross-account-assume-policy policy attached to my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" } } my-efs-acrossaccount-driver-policy attached to my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribeSubnets\" ], \"Resource\": \"*\" }, { \"Sid\": \"VisualEditor1\", \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:DescribeMountTargets\", \"elasticfilesystem:DeleteAccessPoint\", \"elasticfilesystem:ClientMount\", \"elasticfilesystem:DescribeAccessPoints\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:DescribeFileSystems\", \"elasticfilesystem:CreateAccessPoint\" ], \"Resource\": [ \"arn:aws:elasticfilesystem:*:589722580343:access-point/*\", \"arn:aws:elasticfilesystem:*:589722580343:file-system/*\" ] } ] }", "my-cross-account-assume-policy policy attached to Openshift cluster efs csi driver user in account A { \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" } }", "oc -n openshift-cluster-csi-drivers create secret generic my-efs-cross-account --from-literal=awsRoleArn='arn:aws:iam::589722580343:role/my-efs-acrossaccount-role'", "oc -n openshift-cluster-csi-drivers create role access-secrets --verb=get,list,watch --resource=secrets oc -n openshift-cluster-csi-drivers create rolebinding --role=access-secrets default-to-secrets --serviceaccount=openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa", "This step is not mandatory, but can be safer for AWS EFS volume usage.", "EFS volume filesystem policy in account B { \"Version\": \"2012-10-17\", \"Id\": \"efs-policy-wizard-8089bf4a-9787-40f0-958e-bc2363012ace\", \"Statement\": [ { \"Sid\": \"efs-statement-bd285549-cfa2-4f8b-861e-c372399fd238\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"*\" }, \"Action\": [ \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientMount\" ], \"Resource\": \"arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5\", \"Condition\": { \"Bool\": { \"elasticfilesystem:AccessedViaMountTarget\": \"true\" } } }, { \"Sid\": \"efs-statement-03646e39-d80f-4daf-b396-281be1e43bab\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" }, \"Action\": [ \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientMount\" ], \"Resource\": \"arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5\" } ] }", "The cross account efs volume storageClass kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-cross-account-mount-sc provisioner: efs.csi.aws.com mountOptions: - tls parameters: provisioningMode: efs-ap fileSystemId: fs-00f6c3ae6f06388bb directoryPerms: \"700\" gidRangeStart: \"1000\" gidRangeEnd: \"2000\" basePath: \"/account-a-data\" csi.storage.k8s.io/provisioner-secret-name: 
my-efs-cross-account csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers volumeBindingMode: Immediate", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi", "apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3", "oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created", "oc get clustercsidriver efs.csi.aws.com -o yaml", "oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF", "oc get storageclass", "oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - 
name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: file.csi.azure.com 2 parameters: protocol: nfs 3 skuName: Premium_LRS # available values: Premium_LRS, Premium_ZRS mountOptions: - nconnect=4", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1", "oc describe storageclass csi-gce-pd-cmek", "Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi", "oc apply -f pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s", "gcloud services enable file.googleapis.com --project <my_gce_project> 1", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: filestore.csi.storage.gke.io spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: filestore-csi provisioner: filestore.csi.storage.gke.io parameters: connect-mode: DIRECT_PEERING 1 network: network-name 2 allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc -n openshift-machine-api get machinesets -o yaml | grep \"network:\" - network: gcp-filestore-network (...)", "oc get pvc -o json -A | jq -r '.items[] | select(.spec.storageClassName == \"filestore-csi\")", "oc delete <pvc-name> 1", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f cinder-claim.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi 
storageClassName: csi-manila-gold 2", "oc create -f pvc-manila.yaml", "oc get pvc pvc-manila", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: smb.csi.k8s.io spec: managementState: Managed", "oc create -f <file_name>.yaml", "apiVersion: v1 kind: Secret metadata: name: smbcreds 1 namespace: samba-server 2 stringData: username: <username> 3 password: <password> 4", "oc create -f <sc_file_name>.yaml 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <sc_name> 1 provisioner: smb.csi.k8s.io parameters: source: //<hostname>/<shares> 2 csi.storage.k8s.io/provisioner-secret-name: smbcreds 3 csi.storage.k8s.io/provisioner-secret-namespace: samba-server 4 csi.storage.k8s.io/node-stage-secret-name: smbcreds 5 csi.storage.k8s.io/node-stage-secret-namespace: samba-server 6 reclaimPolicy: Delete volumeBindingMode: Immediate mountOptions: - dir_mode=0777 - file_mode=0777 - uid=1001 - gid=1001", "oc create -f <pv_file_name>.yaml 1", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> 1 spec: accessModes: - ReadWriteMany resources: requests: storage: <storage_amount> 2 storageClassName: <sc_name> 3", "oc describe pvc <pvc_name> 1", "Name: pvc-test Namespace: default StorageClass: samba Status: Bound 1", "oc create -f <file_name>.yaml", "apiVersion: v1 kind: Secret metadata: name: smbcreds 1 namespace: samba-server 2 stringData: username: <username> 3 password: <password> 4", "oc create -f <pv_file_name>.yaml 1", "apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: smb.csi.k8s.io name: <pv_name> 1 spec: capacity: storage: 100Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: \"\" mountOptions: - dir_mode=0777 - file_mode=0777 csi: driver: smb.csi.k8s.io volumeHandle: smb-server.default.svc.cluster.local/share# 2 volumeAttributes: source: //<hostname>/<shares> 3 nodeStageSecretRef: name: <secret_name_shares> 4 namespace: <namespace> 5", "oc create -f <pv_file_name>.yaml 1", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> 1 spec: accessModes: - ReadWriteMany resources: requests: storage: <storage_amount> 2 storageClassName: \"\" volumeName: <pv_name> 3", "oc describe pvc <pvc_name> 1", "Name: pvc-test Namespace: default StorageClass: Status: Bound 1", "oc create -f <deployment_file_name>.yaml 1", "apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: <deployment_name> 1 spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx name: <deployment_name> 2 spec: nodeSelector: \"kubernetes.io/os\": linux containers: - name: <deployment_name> 3 image: quay.io/centos/centos:stream8 command: - \"/bin/bash\" - \"-c\" - set -euo pipefail; while true; do echo USD(date) >> <mount_path>/outfile; sleep 1; done 4 volumeMounts: - name: <vol_mount_name> 5 mountPath: <mount_path> 6 readOnly: false volumes: - name: <vol_mount_name> 7 persistentVolumeClaim: claimName: <pvc_name> 8 strategy: rollingUpdate: maxSurge: 0 maxUnavailable: 1 type: RollingUpdate", "oc exec -it <pod_name> -- df -h 1", "Filesystem Size Used Avail Use% Mounted on /dev/sda1 97G 21G 77G 22% /etc/hosts //20.43.191.64/share 97G 21G 77G 22% /mnt/smb", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: 
\"USDopenshift-storage-policy-xxxx\" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi", "~ USD oc delete CSIDriver csi.vsphere.vmware.com", "csidriver.storage.k8s.io \"csi.vsphere.vmware.com\" deleted", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: encryption provisioner: csi.vsphere.vmware.com parameters: storagePolicyName: <storage-policy-name> 1 datastoreurl: \"ds:///vmfs/volumes/vsan:522e875627d-b090c96b526bb79c/\"", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: csi-encrypted provisioner: csi.vsphere.vmware.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer parameters: storagePolicyName: <storage-policy-name> 1", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "~ USD oc edit clustercsidriver csi.vsphere.vmware.com -o yaml", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: csi.vsphere.vmware.com spec: logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal unsupportedConfigOverrides: null driverConfig: driverType: vSphere 1 vSphere: topologyCategories: 2 - openshift-zone - openshift-region", "~ USD oc get csinode", "NAME DRIVERS AGE co8-4s88d-infra-2m5vd 1 27m co8-4s88d-master-0 1 70m co8-4s88d-master-1 1 70m co8-4s88d-master-2 1 70m co8-4s88d-worker-j2hmg 1 47m co8-4s88d-worker-mbb46 1 47m co8-4s88d-worker-zlk7d 1 47m", "~ USD oc get csinode co8-4s88d-worker-j2hmg -o yaml", "spec: drivers: - allocatable: count: 59 name: csi-vsphere.vmware.com nodeID: co8-4s88d-worker-j2hmg topologyKeys: 1 - topology.csi.vmware.com/openshift-zone - topology.csi.vmware.com/openshift-region", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "~ USD oc get pv <pv-name> -o yaml", "nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: topology.csi.vmware.com/openshift-zone 1 operator: In values: - <openshift-zone> -key: topology.csi.vmware.com/openshift-region 2 operator: In values: - <openshift-region> peristentVolumeclaimPolicy: Delete storageClassName: <zoned-storage-class-name> 3 volumeMode: Filesystem", "oc -n openshift-cluster-csi-drivers get cm/vsphere-csi-config -o yaml", "apiVersion: v1 data: cloud.conf: |+ # Labels with topology values are added dynamically via operator [Global] cluster-id = vsphere-01-cwv8p [VirtualCenter \"vcenter.openshift.com\"] insecure-flag = true datacenters = DEVQEdatacenter migration-datastore-url = ds:///vmfs/volumes/vsan:527320283a8c3163-2faa6dc5949a3a28/ kind: ConfigMap metadata: creationTimestamp: \"2024-03-06T09:46:40Z\" name: vsphere-csi-config namespace: openshift-cluster-csi-drivers resourceVersion: \"126687\"", "oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{\"spec\":{\"driverConfig\":{\"vSphere\":{\"globalMaxSnapshotsPerBlockVolume\": 10}}}}' clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched", "oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p 
'{\"spec\":{\"driverConfig\":{\"vSphere\":{\"granularMaxSnapshotsPerBlockVolumeInVVOL\": 5}}}}' clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched", "oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{\"spec\":{\"driverConfig\":{\"vSphere\":{\"granularMaxSnapshotsPerBlockVolumeInVSAN\": 7}}}}' clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched", "oc -n openshift-cluster-csi-drivers get cm/vsphere-csi-config -o yaml", "apiVersion: v1 data: cloud.conf: |+ # Labels with topology values are added dynamically via operator [Global] cluster-id = vsphere-01-cwv8p [VirtualCenter \"vcenter.openshift.com\"] insecure-flag = true datacenters = DEVQEdatacenter migration-datastore-url = ds:///vmfs/volumes/vsan:527320283a8c3163-2faa6dc5949a3a28/ [Snapshot] global-max-snapshots-per-block-volume = 10 1 kind: ConfigMap metadata: creationTimestamp: \"2024-03-06T09:46:40Z\" name: vsphere-csi-config namespace: openshift-cluster-csi-drivers resourceVersion: \"127118\" uid: f6968303-81d8-4048-99c1-d8211363d0fa", "kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi", "oc edit storageclass <storage_class_name> 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1", "oc describe pvc <pvc_name>", "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: 
<azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "get node <node name> 1", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1", "spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/storage/index
Chapter 3. Installing RHEL AI on AWS
Chapter 3. Installing RHEL AI on AWS To install and deploy Red Hat Enterprise Linux AI on AWS, you must first convert the RHEL AI image into an Amazon Machine Image (AMI). In this process, you create the following resources: An S3 bucket with the RHEL AI image AWS EC2 snapshots An AWS AMI An AWS instance 3.1. Converting the RHEL AI image to an AWS AMI Before deploying RHEL AI on an AWS machine, you must set up an S3 bucket and convert the RHEL AI image to an AWS AMI. Prerequisites You have an Access Key ID configured in the AWS IAM account manager . Procedure Install the AWS command-line tool by following the AWS documentation . You need to create an S3 bucket and set the permissions to allow image file conversion to AWS snapshots. Create the necessary environment variables by running the following commands: USD export BUCKET=<custom_bucket_name> USD export RAW_AMI=nvidia-bootc.ami USD export AMI_NAME="rhel-ai" USD export DEFAULT_VOLUME_SIZE=1000 Note On AWS, the DEFAULT_VOLUME_SIZE is measured in GB. You can create an S3 bucket by running the following command: USD aws s3 mb s3://USDBUCKET You must create a trust-policy.json file with the necessary configurations for generating an S3 role for your bucket: USD printf '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vmie.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals":{ "sts:Externalid": "vmimport" } } } ] }' > trust-policy.json Create an S3 role for your bucket, with a name that you choose. In the following example command, vmimport is the name of the role. USD aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json You must create a role-policy.json file with the necessary configurations for generating a policy for your bucket: USD printf '{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ], "Resource":[ "arn:aws:s3:::%s", "arn:aws:s3:::%s/*" ] }, { "Effect":"Allow", "Action":[ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource":"*" } ] }' USDBUCKET USDBUCKET > role-policy.json Create a policy for your bucket by running the following command: USD aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json Now that your S3 bucket is set up, you need to download the RAW image from the Red Hat Enterprise Linux AI download page . Copy the RAW image link and add it to the following command: USD curl -Lo disk.raw <link-to-raw-file> Upload the image to the S3 bucket with the following command: USD aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI Convert the image to a snapshot and store it in the task_id variable by running the following commands: USD printf '{ "Description": "my-image", "Format": "raw", "UserBucket": { "S3Bucket": "%s", "S3Key": "%s" } }' USDBUCKET USDRAW_AMI > containers.json USD task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId) You can check the progress of the disk image to snapshot conversion job with the following command: USD aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active Once the conversion job is complete, you can get the snapshot ID and store it in a variable called snapshot_id by running the following command: USD snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId=="'USD{task_id}'") | .SnapshotTaskDetail.SnapshotId') Add a tag name to the snapshot, so it is easier to identify, by running the following command: USD aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value="USDAMI_NAME" Register an AMI from the snapshot with the following command: USD ami_id=USD(aws ec2 register-image \ --name "USDAMI_NAME" \ --description "USDAMI_NAME" \ --architecture x86_64 \ --root-device-name /dev/sda1 \ --block-device-mappings "DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}" \ --virtualization-type hvm \ --ena-support \ | jq -r .ImageId) You can add another tag name to identify the AMI by running the following command: USD aws ec2 create-tags --resources USDami_id --tags Key=Name,Value="USDAMI_NAME" 3.2. Deploying your instance on AWS using the CLI You can launch the AWS instance with your new RHEL AI AMI from the AWS web console or the CLI. You can use whichever method of deployment you want to launch your instance. The following procedure shows how you can use the CLI to launch your AWS instance with the custom AMI. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI AMI. For more information, see "Converting the RHEL AI image to an AWS AMI". You have the AWS command-line tool installed and properly configured with your aws_access_key_id and aws_secret_access_key. You configured your Virtual Private Cloud (VPC). You created a subnet for your instance. You created an SSH key pair. You created a security group on AWS. Procedure You need to gather several resource IDs for the instance variables. To access the image ID, run the following command: USD aws ec2 describe-images --owners self To access the security group ID, run the following command: USD aws ec2 describe-security-groups To access the subnet ID, run the following command: USD aws ec2 describe-subnets Populate the environment variables for when you create the instance: USD instance_name=rhel-ai-instance USD ami=<ami-id> USD instance_type=<instance-type-size> USD key_name=<key-pair-name> USD security_group=<sg-id> USD subnet=<subnet-id> USD disk_size=<size-of-disk> Create your instance using the variables by running the following command: USD aws ec2 run-instances \ --image-id USDami \ --instance-type USDinstance_type \ --key-name USDkey_name \ --security-group-ids USDsecurity_group \ --subnet-id USDsubnet \ --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' \ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]' User account The default user account in the RHEL AI AMI is cloud-user . It has all permissions through sudo without a password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud-user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by...
model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train
[ "export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000", "aws s3 mb s3://USDBUCKET", "printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json", "aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json", "printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json", "aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json", "curl -Lo disk.raw <link-to-raw-file>", "aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI", "printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json", "task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)", "aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active", "snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId==\"'USD{task_id}'\") | .SnapshotTaskDetail.SnapshotId')", "aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"", "ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)", "aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"", "aws ec2 describe-images --owners self", "aws ec2 describe-security-groups", "aws ec2 describe-subnets", "instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> subnet=<subnet-id> disk_size=<size-of-disk>", "aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud-user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. 
taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/installing/installing_on_aws
2.12. Saving the File
2.12. Saving the File To review the contents of the kickstart file after you have finished choosing your kickstart options, select File => Preview from the pull-down menu. Figure 2.17. Preview To save the kickstart file, click the Save to File button in the preview window. To save the file without previewing it, select File => Save File or press Ctrl + S . A dialog box appears. Select where to save the file. After saving the file, refer to Section 1.10, "Starting a Kickstart Installation" for information on how to start the kickstart installation.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/rhkstool-saving_the_file
Chapter 58. JmxTransOutputDefinitionTemplate schema reference
Chapter 58. JmxTransOutputDefinitionTemplate schema reference Used in: JmxTransSpec Property Description outputType Template for setting the format of the data that will be pushed. For more information see JmxTrans OutputWriters . string host The DNS/hostname of the remote host that the data is pushed to. string port The port of the remote host that the data is pushed to. integer flushDelayInSeconds How many seconds the JmxTrans waits before pushing a new set of data out. integer typeNames Template for filtering data to be included in response to a wildcard query. For more information see JmxTrans queries . string array name Template for setting the name of the output definition. This is used to identify where the results of queries should be sent. string
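To show how these properties fit together, the following is a minimal sketch of an output definition inside the jmxTrans section of a Kafka custom resource. The writer class, the query, and the cluster name are illustrative assumptions rather than values taken from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster # assumed cluster name
spec:
  # ... kafka and zookeeper configuration omitted ...
  jmxTrans:
    outputDefinitions:
      - outputType: "com.googlecode.jmxtrans.model.output.StdOutWriter" # format of the pushed data (assumed writer)
        name: standardOut # queries reference this name to route their results
        flushDelayInSeconds: 5 # wait five seconds between pushes
    kafkaQueries:
      - targetMBean: "kafka.server:type=BrokerTopicMetrics,name=*" # assumed wildcard query
        attributes: ["Count"]
        outputs: ["standardOut"] # send the results to the definition above

The name property is what links a query to an output definition, which is why this reference describes it as identifying where the results of queries should be sent.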
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-jmxtransoutputdefinitiontemplate-reference
Release notes for Red Hat build of OpenJDK 17.0.13
Release notes for Red Hat build of OpenJDK 17.0.13 Red Hat build of OpenJDK 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.13/index
Chapter 1. Red Hat build of OpenJDK 21 overview
Chapter 1. Red Hat build of OpenJDK 21 overview OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows, and are shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/installing_and_using_red_hat_build_of_openjdk_21_on_rhel/openjdk21-overview_openjdk
Chapter 4. Ansible IPMI modules in RHEL
Chapter 4. Ansible IPMI modules in RHEL 4.1. The rhel_mgmt collection The Intelligent Platform Management Interface (IPMI) is a specification for a set of standard protocols to communicate with baseboard management controller (BMC) devices. The IPMI modules allow you to automate hardware management. The IPMI modules are available in: The rhel_mgmt Collection. The package name is ansible-collection-redhat-rhel_mgmt . The RHEL 8 AppStream, as part of the new ansible-collection-redhat-rhel_mgmt package. The following IPMI modules are available in the rhel_mgmt collection: ipmi_boot : Management of boot device order ipmi_power : Power management for the machine The mandatory parameters used for the IPMI modules are: ipmi_boot parameters: Parameter name Description name Hostname or IP address of the BMC password Password to connect to the BMC bootdev Device to be used on next boot * network * floppy * hd * safe * optical * setup * default user Username to connect to the BMC ipmi_power parameters: Parameter name Description name BMC hostname or IP address password Password to connect to the BMC user Username to connect to the BMC state Whether the machine is in the desired power state * on * off * shutdown * reset * boot 4.2. Using the ipmi_boot module The following example shows how to use the ipmi_boot module in a playbook to set the boot device to be used on next boot. For simplicity, the examples use the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The ansible-collection-redhat-rhel_mgmt package is installed. The python3-pyghmi package is installed either on the control node or the managed nodes. The IPMI BMC that you want to control is accessible over the network from the control node or the managed host (if not using localhost as the managed host). Note that the host whose BMC is being configured by the module is generally different from the managed host, as the module contacts the BMC over the network using the IPMI protocol. You have credentials to access the BMC with an appropriate level of access. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Set boot device to be used on next boot hosts: managed-node-01.example.com tasks: - name: Ensure boot device is HD redhat.rhel_mgmt.ipmi_boot: user: <admin_user> password: <password> bootdev: hd Validate the playbook syntax: USD ansible-playbook --syntax-check ~/playbook.yml Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: USD ansible-playbook ~/playbook.yml Verification When you run the playbook, Ansible returns success . Additional resources /usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file 4.3. Using the ipmi_power module This example shows how to use the ipmi_power module in a playbook to check if the system is turned on. For simplicity, the examples use the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The ansible-collection-redhat-rhel_mgmt package is installed. The python3-pyghmi package is installed either on the control node or the managed nodes. The IPMI BMC that you want to control is accessible over the network from the control node or the managed host (if not using localhost as the managed host). Note that the host whose BMC is being configured by the module is generally different from the managed host, as the module contacts the BMC over the network using the IPMI protocol. You have credentials to access the BMC with an appropriate level of access. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Power management hosts: managed-node-01.example.com tasks: - name: Ensure machine is powered on redhat.rhel_mgmt.ipmi_power: user: <admin_user> password: <password> state: on Validate the playbook syntax: USD ansible-playbook --syntax-check ~/playbook.yml Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: USD ansible-playbook ~/playbook.yml Verification When you run the playbook, Ansible returns true . Additional resources /usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file
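The examples above do not set the name parameter explicitly because the control host and managed host are the same machine. When the BMC is a separate device on the management network, you can point the module at it with the name parameter from the tables above. The following playbook is a minimal sketch; the BMC hostname bmc.example.com is an illustrative assumption, not a value from this chapter:

--- # Hypothetical playbook: power on a machine through a remote BMC.
- name: Power management through a remote BMC
  hosts: managed-node-01.example.com
  tasks:
    - name: Ensure machine is powered on
      redhat.rhel_mgmt.ipmi_power:
        name: bmc.example.com # hostname or IP address of the BMC (assumed)
        user: <admin_user> # username to connect to the BMC
        password: <password> # password to connect to the BMC
        state: on # desired power state

Because the module talks to the BMC directly over the IPMI protocol, the managed host only needs network access to the BMC; it does not need to be the machine being powered on.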
[ "--- - name: Set boot device to be used on next boot hosts: managed-node-01.example.com tasks: - name: Ensure boot device is HD redhat.rhel_mgmt.ipmi_boot: user: <admin_user> password: <password> bootdev: hd", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Power management hosts: managed-node-01.example.com tasks: - name: Ensure machine is powered on redhat.rhel_mgmt.ipmi_power: user: <admin_user> password: <password> state: on", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/assembly_ansible-ipmi-modules-in-rhel_automating-system-administration-by-using-rhel-system-roles
Chapter 8. Using RBAC to define and apply permissions
Chapter 8. Using RBAC to define and apply permissions 8.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 8.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings.
For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups, and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 8.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action are used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 8.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 8.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption.
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 8.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 8.4. Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.
Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection 
get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] 
deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] 
pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 8.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which shows the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 8.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: $ oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in the joe project by running: $ oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: $ oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: $ oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admin-0 role binding. 8.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: $ oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: $ oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: $ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 8.8.
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: $ oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the cluster role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: $ oc create clusterrole podviewonly --verb=get --resource=pod 8.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, you can specify a project with the -n flag. If no project is specified, the current project is used. You can use the following commands for local RBAC management. Table 8.1. Local role binding operations Command Description $ oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. $ oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. $ oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. $ oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. $ oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. $ oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. $ oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 8.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 8.2. Cluster role binding operations Command Description $ oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. $ oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. $ oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. $ oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 8.11. Creating a cluster admin The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: $ oc adm policy add-cluster-role-to-user cluster-admin <user> 8.12. Cluster role bindings for unauthenticated groups Note Before OpenShift Container Platform 4.17, unauthenticated groups were allowed access to some cluster roles. Clusters updated from versions before OpenShift Container Platform 4.17 retain this access for unauthenticated groups. For security reasons, OpenShift Container Platform 4.18 does not allow unauthenticated groups to have default access to cluster roles. There are use cases where it might be necessary to add system:unauthenticated to a cluster role.
Cluster administrators can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access.
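For example, to grant one of the listed roles to unauthenticated clients, a cluster administrator can bind it to the system:unauthenticated group using the cluster role binding commands from section 8.10. This is an illustrative sketch, not a recommendation for any particular role:
$ oc adm policy add-cluster-role-to-group system:webhook system:unauthenticated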
[ "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete 
deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- 
---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/using-rbac
20.4. Enabling Management Encryption
20.4. Enabling Management Encryption Red Hat recommends enabling both management and I/O encryption, but if you only want to use I/O encryption, you can skip this section and continue with Section 20.3.1, "Enabling I/O Encryption" . Prerequisites Enabling management encryption requires that storage servers are offline. Schedule an outage window for volumes, applications, clients, and other end users before beginning this process. Be aware that features such as snapshots and geo-replication may also be affected by this outage. Procedure 20.7. Enabling management encryption Prepare to enable encryption Unmount all volumes from all clients Run the following command on each client, for each volume mounted on that client. Stop NFS Ganesha or SMB services, if used Run the following command on any gluster server to disable NFS-Ganesha. Run the following command on any gluster server to stop SMB. Unmount shared storage, if used Run the following command on all servers to unmount shared storage. Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Important Features that require shared storage, such as snapshots and geo-replication, may not work until after this process is complete. Stop all volumes Run the following command on any server to stop all volumes, including the shared storage volume. Stop gluster services on all servers For Red Hat Enterprise Linux 7 based installations: For Red Hat Enterprise Linux 6 based installations: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Create and edit the secure-access file on all servers and clients Create a new /var/lib/glusterd/secure-access file. This file can be empty if you are using the default settings. Your Certificate Authority may require changes to the SSL certificate depth setting, transport.socket.ssl-cert-depth , in order to work correctly. To edit this setting, add the following line to the secure-access file, replacing n with the certificate depth required by your Certificate Authority. Clean up after configuring management encryption Start the glusterd service on all servers For Red Hat Enterprise Linux 7 based installations: For Red Hat Enterprise Linux 6 based installations: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Start all volumes Run the following command on any host to start all volumes including shared storage. Mount shared storage, if used Run the following command on all servers to mount shared storage. Restart NFS Ganesha or SMB services, if used Run the following command on any gluster server to start NFS-Ganesha. Run the following command on any gluster server to start SMB. Mount volumes on clients The process for mounting a volume depends on the protocol your client is using. The following command mounts a volume using the native FUSE protocol.
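As an illustrative recap of the secure-access step in the procedure above, the following sketch assumes a Certificate Authority that requires a certificate depth of 3; the depth value is an assumption and must match your own CA:
# echo "option transport.socket.ssl-cert-depth 3" > /var/lib/glusterd/secure-access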
[ "umount mount-point", "systemctl stop nfs-ganesha", "systemctl stop ctdb", "umount /var/run/gluster/shared_storage", "for vol in `gluster volume list`; do gluster --mode=script volume stop USDvol; sleep 2s; done", "systemctl stop glusterd pkill glusterfs", "service glusterd stop pkill glusterfs", "touch /var/lib/glusterd/secure-access", "echo \"option transport.socket.ssl-cert-depth n \" > /var/lib/glusterd/secure-access", "systemctl start glusterd", "service glusterd start", "for vol in `gluster volume list`; do gluster --mode=script volume start USDvol; sleep 2s; done", "mount -t glusterfs hostname :/gluster_shared_storage /run/gluster/shared_storage", "systemctl start nfs-ganesha", "systemctl start ctdb", "mount -t glusterfs server1:/testvolume /mnt/glusterfs" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch20s04
Chapter 4. Quarkus CXF overview
Chapter 4. Quarkus CXF overview This chapter provides information about Quarkus CXF extensions, CXF modules, and CXF annotations supported by Quarkus CXF. 4.1. Quarkus CXF Here is the list of extensions produced by this project. Follow the links under the extension names to learn how to use each extension, how to configure it, and what its known limitations are. Quarkus CXF extension Support level Since Supported standards Quarkus CXF quarkus-cxf Stable 0.1.0 JAX-WS , JAXB , WS-Addressing , WS-Policy , MTOM Quarkus CXF Metrics Feature quarkus-cxf-rt-features-metrics Stable 0.14.0 Quarkus CXF OpenTelemetry quarkus-cxf-integration-tracing-opentelemetry Stable 2.7.0 Quarkus CXF WS-Security quarkus-cxf-rt-ws-security Stable 0.14.0 WS-Security , WS-SecurityPolicy Quarkus CXF WS-ReliableMessaging quarkus-cxf-rt-ws-rm Stable 1.5.3 WS-ReliableMessaging Quarkus CXF Security Token Service (STS) quarkus-cxf-services-sts Stable 1.5.3 WS-Trust Quarkus CXF HTTP Async Transport quarkus-cxf-rt-transports-http-hc5 Stable 1.1.0 Quarkus CXF XJC Plugins quarkus-cxf-xjc-plugins Stable 1.5.11 4.2. Supported CXF modules Here is a list of CXF modules supported by Quarkus CXF. You should typically not depend on these directly, but rather use one of the extensions listed above that brings the given CXF module in as a transitive dependency. 4.2.1. Front ends Of the CXF front ends, only the JAX-WS front end is fully supported by quarkus-cxf . The Simple front end may work in JVM mode, but it is not properly tested, so we advise against using it. 4.2.2. Data Bindings Of the CXF data bindings, only the following are supported: JAXB MTOM Attachments with JAXB 4.2.3. Transports Of the CXF transports, only the following are supported: quarkus-cxf implements its own custom transport based on Quarkus and Vert.x for serving SOAP endpoints HTTP client via quarkus-cxf , including Basic Authentication Asynchronous Client HTTP Transport via quarkus-cxf-rt-transports-http-hc5 4.2.4. Tools wsdl2java - see the Generate the Model classes from WSDL section of the User guide java2ws - see the Generate WSDL from Java section of the User guide 4.2.5. Supported SOAP Bindings All CXF WSDL bindings are supported. To switch to SOAP 1.2 or to add MTOM, set quarkus.cxf.[client|endpoint]."name".soap-binding to one of the following values: Binding Property Value SOAP 1.1 (default) http://schemas.xmlsoap.org/wsdl/soap/http SOAP 1.2 http://www.w3.org/2003/05/soap/bindings/HTTP/ SOAP 1.1 with MTOM http://schemas.xmlsoap.org/wsdl/soap/http?mtom=true SOAP 1.2 with MTOM http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true 4.3. Unsupported CXF modules Here is a list of CXF modules currently not supported by Quarkus CXF, along with possible alternatives and/or the reasons why the given module is not supported.
CXF module Alternative JAX-RS cxf-rt-frontend-jaxrs cxf-rt-rs-client Use Quarkus RESTEasy Fediz Use Quarkus OpenID Connect Aegis Use JAXB and JAX-WS DOSGI Karaf JiBX Use JAXB and JAX-WS Local transport cxf-rt-transports-local Use HTTP transport JMS transport cxf-rt-transports-jms Use HTTP transport JBI cxf-rt-transports-jbi cxf-rt-bindings-jbi Deprecated in CXF use HTTP transport UDP transport cxf-rt-transports-udp Use HTTP transport Coloc transport Use HTTP transport WebSocket transport cxf-rt-transports-websocket Use HTTP transport Clustering cxf-rt-features-clustering Planned CORBA cxf-rt-bindings-corba Use JAX-WS SDO databinding cxf-rt-databinding-sdo XMLBeans Deprecated in CXF Javascript frontend Use JAX-WS JCA transport Use HTTP transport WS-Transfer runtime cxf-rt-ws-transfer Throttling cxf-rt-features-throttling Use load balancer 4.4. Supported CXF annotations Here is the status of CXF annotations on Quarkus. Unless stated otherwise, the support is available via io.quarkiverse.cxf:quarkus-cxf . Annotation Status @org.apache.cxf.feature.Features Supported @org.apache.cxf.interceptor.InInterceptors Supported @org.apache.cxf.interceptor.OutInterceptors Supported @org.apache.cxf.interceptor.OutFaultInterceptors Supported @org.apache.cxf.interceptor.InFaultInterceptors Supported @org.apache.cxf.annotations.WSDLDocumentation Supported @org.apache.cxf.annotations.WSDLDocumentationCollection Supported @org.apache.cxf.annotations.SchemaValidation Supported @org.apache.cxf.annotations.DataBinding Only the default value org.apache.cxf.jaxb.JAXBDataBinding is supported @org.apache.cxf.ext.logging.Logging Supported @org.apache.cxf.annotations.GZIP Supported @org.apache.cxf.annotations.FastInfoset Supported via com.sun.xml.fastinfoset:FastInfoset dependency @org.apache.cxf.annotations.EndpointProperty Supported @org.apache.cxf.annotations.EndpointProperties Supported @org.apache.cxf.annotations.Policy Supported @org.apache.cxf.annotations.Policies Supported @org.apache.cxf.annotations.UseAsyncMethod Supported
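For example, to switch a client to SOAP 1.2 with MTOM using the soap-binding property described in section 4.2.5, you could add a line such as the following to application.properties ; the client name myClient is illustrative:
quarkus.cxf.client."myClient".soap-binding = http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true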
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_quarkus_reference/ass-camel-quarkus-cxf-overview
Chapter 6. Monitoring and Metrics
Chapter 6. Monitoring and Metrics Gluster Web Administration provides deep metrics and visualization of Gluster clusters, the physical server nodes and the storage elements (disks) through the Grafana open-source monitoring platform.
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/monitoring_guide/monitoring_and_metrics
16.6. Displaying Tiering Status Information (Deprecated)
16.6. Displaying Tiering Status Information (Deprecated) Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments or in existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. The status command displays status information for a tiered volume. # gluster volume tier VOLNAME status For example,
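the following illustrative run queries a tiered volume named test-volume ; per node, the output reports the number of promoted files, the number of demoted files, and the migration status:
# gluster volume tier test-volume status
Node       Promoted files   Demoted files   Status
---------  ---------------  --------------  -----------
localhost  1                5               in progress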
[ "gluster volume tier test-volume status Node Promoted files Demoted files Status --------- --------- --------- --------- localhost 1 5 in progress server1 0 2 in progress Tiering Migration Functionality: test-volume: success" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_data_tiering-status
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/making-open-source-more-inclusive
Chapter 8. Monitoring a high availability Red Hat Ceph Storage cluster
Chapter 8. Monitoring a high availability Red Hat Ceph Storage cluster When you deploy an overcloud with Red Hat Ceph Storage, Red Hat OpenStack Platform uses the ceph-mon monitor daemon to manage the Ceph cluster. Director deploys the daemon on all Controller nodes. 8.1. Checking Red Hat Ceph monitoring service status To check the status of the Red Hat Ceph Storage monitoring service, log in to a Controller node and run the service ceph status command. Procedure Log in to a Controller node and check that the Ceph Monitoring service is running: 8.2. Checking Red Hat Ceph monitoring configuration To check the configuration of the Red Hat Ceph Storage monitoring service, log in to a Controller node or a Red Hat Ceph node and open the /etc/ceph/ceph.conf file. Procedure Log in to a Controller node or a Ceph node and open the /etc/ceph/ceph.conf file to view the monitoring configuration parameters: This example shows the following information: All three Controller nodes are configured to monitor the Red Hat Ceph Storage cluster with the mon_initial_members parameter. The 172.19.0.11/24 network is configured to provide a communication path between the Controller nodes and the Red Hat Ceph Storage nodes. The Red Hat Ceph Storage nodes are assigned to a separate network from the Controller nodes, and the IP addresses for the monitoring Controller nodes are 172.18.0.15 , 172.18.0.16 , and 172.18.0.17 . 8.3. Checking Red Hat Ceph node status To check the status of a specific Red Hat Ceph Storage node, log in to the node and run the ceph -s command. Procedure Log in to the Ceph node and run the ceph -s command: This example output shows that the health parameter value is HEALTH_OK , which indicates that the Ceph node is active and healthy. The output also shows three Ceph monitor services that are running on the three overcloud-controller nodes and the IP addresses and ports of the services. 8.4. Additional resources Red Hat Ceph product page
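As a supplement to the checks above, you can confirm that the monitors have formed quorum; this is a sketch that assumes the ceph CLI and admin keyring are available on the node:
$ sudo ceph quorum_status --format json-pretty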
[ "sudo service ceph status === mon. overcloud-controller-0 === mon. overcloud-controller-0 : running {\"version\":\"0.94.1\"}", "[global] osd_pool_default_pgp_num = 128 osd_pool_default_min_size = 1 auth_service_required = cephx mon_initial_members = overcloud-controller-0 , overcloud-controller-1 , overcloud-controller-2 fsid = 8c835acc-6838-11e5-bb96-2cc260178a92 cluster_network = 172.19.0.11/24 auth_supported = cephx auth_cluster_required = cephx mon_host = 172.18.0.17,172.18.0.15,172.18.0.16 auth_client_required = cephx osd_pool_default_size = 3 osd_pool_default_pg_num = 128 public_network = 172.18.0.17/24", "ceph -s cluster 8c835acc-6838-11e5-bb96-2cc260178a92 health HEALTH_OK monmap e1: 3 mons at { overcloud-controller-0 =172.18.0.17:6789/0, overcloud-controller-1 =172.18.0.15:6789/0, overcloud-controller-2 =172.18.0.16:6789/0} election epoch 152, quorum 0,1,2 overcloud-controller-1 , overcloud-controller-2 , overcloud-controller-0 osdmap e543: 6 osds: 6 up, 6 in pgmap v1736: 256 pgs, 4 pools, 0 bytes data, 0 objects 267 MB used, 119 GB / 119 GB avail 256 active+clean" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_deployment_and_usage/assembly_monitoring-ha-ceph-cluster_rhosp
Chapter 3. Installing power monitoring for Red Hat OpenShift
Chapter 3. Installing power monitoring for Red Hat OpenShift Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install power monitoring for Red Hat OpenShift by deploying the Power monitoring Operator in the OpenShift Container Platform web console. 3.1. Installing the Power monitoring Operator As a cluster administrator, you can install the Power monitoring Operator from OperatorHub by using the OpenShift Container Platform web console. Warning You must remove any previously installed versions of the Power monitoring Operator before installation. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators OperatorHub . Search for power monitoring , click the Power monitoring for Red Hat OpenShift tile, and then click Install . Click Install again to install the Power monitoring Operator. Power monitoring for Red Hat OpenShift is now available in all namespaces of the OpenShift Container Platform cluster. Verification Verify that the Power monitoring Operator is listed in Operators Installed Operators . The Status should resolve to Succeeded . 3.2. Deploying Kepler You can deploy Kepler by creating an instance of the Kepler custom resource definition (CRD) by using the Power monitoring Operator. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Click Create Kepler . On the Create Kepler page, ensure the Name is set to kepler . Important The name of your Kepler instance must be set to kepler . All other instances are ignored by the Power monitoring Operator. Click Create to deploy Kepler and power monitoring dashboards.
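If you prefer to create the Kepler instance from the CLI instead of the web console form, a minimal manifest looks like the following sketch. The apiVersion shown is an assumption based on the Kepler CRD installed by the Operator; verify it with oc api-resources before applying:
apiVersion: kepler.system.sustainable.computing.io/v1alpha1
kind: Kepler
metadata:
  name: kepler  # the instance must be named kepler; other names are ignored by the Operator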
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/power_monitoring/installing-power-monitoring
1.3. Clustering
1.3. Clustering Support for redundant ring for standalone Corosync, BZ# 722469 Red Hat Enterprise Linux 6.2 introduces support for redundant ring with autorecovery feature as a Technology Preview. Refer to Section 2.7, "Clustering" for a list of known issues associated with this Technology Preview. corosync-cpgtool, BZ# 688260 The corosync-cpgtool now specifies both interfaces in a dual ring configuration. This feature is a Technology Preview. Disabling rgmanager in /etc/cluster.conf, BZ# 723925 As a consequence of converting the /etc/cluster.conf configuration file to be used by pacemaker , rgmanager must be disabled. The risk of not doing this is high; after a successful conversion, it would be possible to start rgmanager and pacemaker on the same host, managing the same resources. Consequently, Red Hat Enterprise Linux 6.2 includes a feature (as a Technology Preview) that forces the following requirements: rgmanager must refuse to start if it sees the <rm disabled="1"> flag in /etc/cluster.conf . rgmanager must stop any resources and exit if the <rm disabled="1"> flag appears in /etc/cluster.conf during a reconfiguration. pacemaker, BZ# 456895 Pacemaker, a scalable high-availability cluster resource manager, is included in Red Hat Enterprise Linux 6 as a Technology Preview. Pacemaker is not fully integrated with the Red Hat cluster stack.
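For illustration, the disabled flag described above sits on the rm element of /etc/cluster.conf ; this is a hedged sketch of the relevant fragment only, not a complete cluster configuration:
<cluster>
  <!-- rgmanager refuses to start while this flag is present -->
  <rm disabled="1"/>
</cluster>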
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/clustering_tp
Using alt-java with Red Hat build of OpenJDK
Using alt-java with Red Hat build of OpenJDK Red Hat build of OpenJDK 8 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_alt-java_with_red_hat_build_of_openjdk/index
Chapter 16. Set Up Isolation Levels
Chapter 16. Set Up Isolation Levels 16.1. About Isolation Levels Isolation levels determine when readers can view a concurrent write. READ_COMMITTED and REPEATABLE_READ are the two isolation modes offered in Red Hat JBoss Data Grid. READ_COMMITTED . This isolation level is applicable to a wide variety of requirements. This is the default value in Remote Client-Server and Library modes. REPEATABLE_READ . This isolation level ensures that a transaction reading the same key multiple times sees the same value each time, even if another transaction commits a change in between. Important The only valid value for locks in Remote Client-Server mode is the default READ_COMMITTED value. The value explicitly specified with the isolation value is ignored. If the locking element is not present in the configuration, the default isolation value is READ_COMMITTED . For isolation mode configuration examples in JBoss Data Grid, see the lock striping configuration samples: See Section 15.2, "Configure Lock Striping (Remote Client-Server Mode)" for a Remote Client-Server mode configuration sample. See Section 15.3, "Configure Lock Striping (Library Mode)" for a Library mode configuration sample.
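For illustration, the isolation level is set on the locking element of the cache configuration in Library mode; this is a hedged fragment, and, as noted above, the value is ignored in Remote Client-Server mode:
<locking isolation="REPEATABLE_READ" />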
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-set_up_isolation_levels
7.107. ledmon
7.107. ledmon 7.107.1. RHBA-2013:0479 - ledmon bug fix and enhancement update Updated ledmon packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The ledmon and ledctl utilities are user space applications designed to control LEDs associated with each slot in an enclosure or a drive bay. There are two types of systems: 2-LED systems (Activity LED, Status LED) and 3-LED systems (Activity LED, Locate LED, Fail LED). Users must have root privileges to use these applications. Note The ledmon package has been upgraded to upstream version 0.72, which provides a number of bug fixes and enhancements over the previous version. (BZ#817974) Users of ledmon are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
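For illustration, typical usage looks like the following sketch; /dev/sda is a placeholder device name. ledmon starts monitoring drive activity, while ledctl turns the Locate LED for a drive on and off:
# ledmon
# ledctl locate=/dev/sda
# ledctl locate_off=/dev/sda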
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ledmon
Chapter 3. Common automation execution environment scenarios
Chapter 3. Common automation execution environment scenarios Use the following example definition files to address common configuration scenarios. 3.1. Updating the automation hub CA certificate Use this example to customize the default definition file to add a CA certificate to the additional_build_files section, move the file to the appropriate directory, and, finally, run the command that updates the dynamic configuration of CA certificates so that the system trusts this CA certificate. Prerequisites A custom CA certificate, for example rootCA.crt . Note Customizing the CA certificate using prepend_base means that the resulting CA configuration appears in all other build stages and the final image, because all other build stages inherit from the base image. additional_build_files: # copy the CA public key into the build context, we will copy and use it in the base image later - src: files/rootCA.crt dest: configs additional_build_steps: prepend_base: # copy a custom CA cert into the base image and recompute the trust database # because this is in "base", all stages will inherit (including the final EE) - COPY _build/configs/rootCA.crt /usr/share/pki/ca-trust-source/anchors - RUN update-ca-trust options: package_manager_path: /usr/bin/microdnf # downstream images use non-standard package manager [galaxy] server_list = automation_hub 3.2. Using automation hub authentication details when building automation execution environments Use the following example to customize the default definition file to pass automation hub authentication details into the automation execution environment build without exposing them in the final automation execution environment image. Prerequisites You have created an automation hub API token and stored it in a secure location, for example in a file named token.txt . Define a build argument that gets populated with the automation hub API token: export ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN=$(cat <token.txt>) additional_build_steps: prepend_galaxy: # define a custom build arg env passthru- we still also have to pass # `--build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN` to get it to pick it up from the host env - ARG ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN - ENV ANSIBLE_GALAXY_SERVER_LIST=automation_hub - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_URL=https://console.redhat.com/api/automation-hub/content/<yourhuburl>-synclist/ - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_AUTH_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token 3.3. Additional resources For information regarding the different parts of an automation execution environment definition file, see Breakdown of definition file content . For additional example definition files for common scenarios, see the Common scenarios section of the Ansible Builder Documentation
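Returning to the build in section 3.2: the token build argument still has to be passed on the command line when you build the image. This is a hedged sketch, where my-ee is an illustrative tag:
$ ansible-builder build --build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN --tag my-ee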
[ "additional_build_files: # copy the CA public key into the build context, we will copy and use it in the base image later - src: files/rootCA.crt dest: configs additional_build_steps: prepend_base: # copy a custom CA cert into the base image and recompute the trust database # because this is in \"base\", all stages will inherit (including the final EE) - COPY _build/configs/rootCA.crt /usr/share/pki/ca-trust-source/anchors - RUN update-ca-trust options: package_manager_path: /usr/bin/microdnf # downstream images use non-standard package manager [galaxy] server_list = automation_hub", "export ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN=USD(cat <token.txt>)", "additional_build_steps: prepend_galaxy: # define a custom build arg env passthru- we still also have to pass # `--build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN` to get it to pick it up from the host env - ARG ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN - ENV ANSIBLE_GALAXY_SERVER_LIST=automation_hub - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_URL=https://console.redhat.com/api/automation-hub/content/<yourhuburl>-synclist/ - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_AUTH_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/creating_and_consuming_execution_environments/assembly-common-ee-scenarios
Chapter 2. Deploying OpenShift Data Foundation on Google Cloud
Chapter 2. Deploying OpenShift Data Foundation on Google Cloud You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. Deploying this way creates internal cluster resources, which results in internal provisioning of the base services and makes additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, because you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict users to write or delete operations on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later. Procedure Create a service account: where <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to write or delete operations on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling and disabling key rotation when using KMS Common security practices require periodic rotation of encryption keys. You can enable or disable key rotation when using KMS. 2.3.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The examples below use @weekly . Important Key rotation is only supported for RBD-backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.3.1.2.
Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of a storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the previous step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. Key rotation is now disabled for the PVC. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Be aware that the default storage class of the Google Cloud platform uses hard disk drives (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class using pd-ssd , as shown in the following ssd-storageclass.yaml example: Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set as standard . However, if you created a storage class to use SSD-based disks for better performance, you need to select that storage class. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.
Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note If you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above.
The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when five or more failure domains are present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the Ceph monitor count. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) ceph-csi-operator ceph-csi-controller-manager-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4.
Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io
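If you prefer the command line, the same check can be performed with oc (cluster access with sufficient permissions is assumed):

oc get storageclass

The three storage classes listed above should appear in the output.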
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" 
--overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'", "oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true", "oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Delete", "patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_google_cloud/deploying_openshift_data_foundation_on_google_cloud
Chapter 27. BPMN process fluent API for Business Central processes
Chapter 27. BPMN process fluent API for Business Central processes Red Hat Decision Manager provides a BPMN process fluent API that enables you to create business processes using factories. You can also manually validate the business process that you created using the process fluent API. The process fluent API is defined in the org.kie.api.fluent package. Therefore, instead of using the BPMN2 XML standard, you can use the process fluent API to create business processes in a few lines of code. 27.1. Example requests with the BPMN process fluent API The following example includes BPMN process fluent API requests for basic interactions with a business process. For more examples, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-knowledge-$VERSION/kie-api/src/main/java/org/kie/api/fluent . Creating and interacting with Business Central business processes The following example shows a basic business process with a script task, an exception handler, and a variable: Example request to create and interact with a Business Central business process Process process = // Create process builder factory.processBuilder(processId) // package and name .packageName("org.jbpm") .name("My process") // start node .startNode(1).name("Start").done() // Add variable of type string .variable(var("pepe", String.class)) // Add exception handler .exceptionHandler(IllegalArgumentException.class, Dialect.JAVA, "System.out.println(\"Exception\");") // script node in Java language that prints "action" .actionNode(2).name("Action") .action(Dialect.JAVA, "System.out.println(\"Action\");").done() // end node .endNode(3).name("End").done() // connections .connection(1, 2) .connection(2, 3) .build(); In this example, a ProcessBuilderFactory reference is obtained and then, using the processBuilder(String processId) method, a ProcessBuilder instance is created, which is associated with the given process ID. The ProcessBuilder instance enables you to build a definition of the created process using the fluent API. A business process consists of three components: Header: The header section contains global elements such as the name of the process, imports, and variables. In the example, the header contains the name of the process and the package name. Nodes: The nodes section contains all the different nodes that are part of the process. In the example, nodes are added to the process by calling the startNode() , actionNode() , and endNode() methods. These methods return a specific NodeBuilder that allows you to set the properties of that node. After the code finishes configuring that specific node, the done() method returns the NodeContainerBuilder to add more nodes, if necessary. Connections: The connections section links the nodes to create a flow chart. In the example, once you add all the nodes, you must connect them by creating connections between them. You can call the connection() method, which links the nodes. Finally, you can call the build() method and obtain the generated process definition. The build() method also validates the process definition and throws an exception if the process definition is not valid. 27.2. Example requests to execute a business process Once you create a valid process definition instance, you can execute it using a combination of public and internal KIE APIs. To execute a process, create a Resource , which is used to create a KieBase .
Using the KieBase , you can create a KieSession to execute the process. The following example uses the ProcessBuilderFactory.toBytes method to create a ByteArrayResource resource. Example request to execute a process // Build resource from Process KieResources resources = ServiceRegistry.getInstance().get(KieResources.class); Resource res = resources .newByteArrayResource(factory.toBytes(process)) .setSourcePath("/tmp/processFactory.bpmn2"); // source path or target path must be set to be added into kbase // Build kie base from this resource using KIE API KieServices ks = KieServices.Factory.get(); KieRepository kr = ks.getRepository(); KieFileSystem kfs = ks.newKieFileSystem(); kfs.write(res); KieBuilder kb = ks.newKieBuilder(kfs); kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built. KieContainer kContainer = ks.newKieContainer(kr.getDefaultReleaseId()); KieBase kbase = kContainer.getKieBase(); // Create kie session using KieBase KieSessionConfiguration conf = ...; Environment env = ....; KieSession ksession = kbase.newKieSession(conf,env); // execute process using same process Id that is used to obtain ProcessBuilder instance ksession.startProcess(processId)
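If the process declares variables, such as the pepe variable in the earlier example, you can supply initial values when starting it. The KIE API provides an overload of startProcess that accepts a parameter map; the following is a minimal sketch, reusing the variable name and the ksession and processId references from the examples above:

import java.util.HashMap;
import java.util.Map;
import org.kie.api.runtime.process.ProcessInstance;

// Supply an initial value for the "pepe" variable declared in the process
Map<String, Object> params = new HashMap<>();
params.put("pepe", "hello");

// Start the process with parameters and log the new instance ID
ProcessInstance instance = ksession.startProcess(processId, params);
System.out.println("Started process instance: " + instance.getId());

The supplied values become the initial state of the corresponding process variables for that process instance.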
[ "Process process = // Create process builder factory.processBuilder(processId) // package and name .packageName(\"org.jbpm\") .name(\"My process\") // start node .startNode(1).name(\"Start\").done() // Add variable of type string .variable(var(\"pepe\", String.class)) // Add exception handler .exceptionHandler(IllegalArgumentException.class, Dialect.JAVA, \"System.out.println(\\\"Exception\\\");\") // script node in Java language that prints \"action\" .actionNode(2).name(\"Action\") .action(Dialect.JAVA, \"System.out.println(\\\"Action\\\");\").done() // end node .endNode(3).name(\"End\").done() // connections .connection(1, 2) .connection(2, 3) .build();", "// Build resource from Process KieResources resources = ServiceRegistry.getInstance().get(KieResources.class); Resource res = resources .newByteArrayResource(factory.toBytes(process)) .setSourcePath(\"/tmp/processFactory.bpmn2\"); // source path or target path must be set to be added into kbase // Build kie base from this resource using KIE API KieServices ks = KieServices.Factory.get(); KieRepository kr = ks.getRepository(); KieFileSystem kfs = ks.newKieFileSystem(); kfs.write(res); KieBuilder kb = ks.newKieBuilder(kfs); kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built. KieContainer kContainer = ks.newKieContainer(kr.getDefaultReleaseId()); KieBase kbase = kContainer.getKieBase(); // Create kie session using KieBase KieSessionConfiguration conf = ...; Environment env = ....; KieSession ksession = kbase.newKieSession(conf,env); // execute process using same process Id that is used to obtain ProcessBuilder instance ksession.startProcess(processId)" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/bpmn-fluent-api-con_kie-apis
Chapter 5. Migration
Chapter 5. Migration This chapter provides information on migrating to the versions of components included in Red Hat Software Collections 3.7. 5.1. Migrating to MariaDB 10.5 The rh-mariadb105 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mariadb105 Software Collection does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb105 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb105 Software Collection while the rh-mariadb103 Collection is still installed and even running. The rh-mariadb105 Software Collection includes the rh-mariadb105-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mariadb105*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb105* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mysql80 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The recommended migration path from MariaDB 5.5 to MariaDB 10.5 is to upgrade to MariaDB 10.0 first, and then upgrade one version at a time. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 , Migrating to MariaDB 10.1 , Migrating to MariaDB 10.2 , and Migrating to MariaDB 10.3 . Note that MariaDB 10.4 is not available as a Software Collection, so you must migrate directly from rh-mariadb103 to rh-mariadb105 . Note The rh-mariadb105 Software Collection supports neither mounting over NFS nor dynamic registration using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb103 and rh-mariadb105 Software Collections Significant changes between MariaDB 10.3 and MariaDB 10.5 include: MariaDB now uses the unix_socket authentication plug-in by default. The plug-in enables users to use operating system credentials when connecting to MariaDB through the local Unix socket file. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaries. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. In parallel replication, the slave_parallel_mode now defaults to optimistic . In the InnoDB storage engine, defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . MariaDB now uses the libedit implementation of the underlying software managing the MariaDB command history (the .mysql_history file) instead of the previously used readline library. This change impacts users working directly with the .mysql_history file.
Note that .mysql_history is a file managed by the MariaDB or MySQL applications, and users should not work with the file directly. The human-readable appearance is coincidental. Note To increase security, you can consider not maintaining a history file. To disable the command history recording: Remove the .mysql_history file if it exists. Use either of the following approaches: Set the MYSQL_HISTFILE variable to /dev/null and include this setting in any of your shell's startup files. Change the .mysql_history file to a symbolic link to /dev/null : ln -s /dev/null $HOME/.mysql_history MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options. Changes to the PAM plug-in in MariaDB 10.5 include: MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plug-in. The PAM plug-in version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to utilize additional PAM modules. The helper binary can be executed only by users in the mysql group. By default, the group contains only the mysql user. To prevent password-guessing attacks through this helper utility, which are neither throttled nor logged, Red Hat recommends that administrators do not add more users to the mysql group. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plug-in and its related files have been moved to a new subpackage, mariadb-pam . As a result, no new setuid root binary is introduced on systems that do not use PAM authentication for MariaDB . The rh-mariadb105-mariadb-pam package contains both PAM plug-in versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. The rh-mariadb105-mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plug-in available in MariaDB 10.5 , install the rh-mariadb105-mariadb-pam package manually. For more information, see the upstream documentation about changes in MariaDB 10.4 and changes in MariaDB 10.5 . See also upstream information about upgrading to MariaDB 10.4 and upgrading to MariaDB 10.5 . 5.1.2. Upgrading from the rh-mariadb103 to the rh-mariadb105 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb103 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take longer than a normal shutdown.
mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb103 server: systemctl stop rh-mariadb103-mariadb.service Install the rh-mariadb105 Software Collection, including the subpackage providing the mysql_upgrade utility: yum install rh-mariadb105-mariadb-server rh-mariadb105-mariadb-server-utils Note that it is possible to install the rh-mariadb105 Software Collection while the rh-mariadb103 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb105 , which is stored in the /etc/opt/rh/rh-mariadb105/my.cnf file and the /etc/opt/rh/rh-mariadb105/my.cnf.d/ directory. Compare it with configuration of rh-mariadb103 stored in /etc/opt/rh/rh-mariadb103/my.cnf and /etc/opt/rh/rh-mariadb103/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb103 Software Collection is stored in the /var/opt/rh/rh-mariadb103/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb105/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data is owned by the mysql user and SELinux context is correct. Start the rh-mariadb105 database server: systemctl start rh-mariadb105-mariadb.service Perform the data migration. Note that running the mysql_upgrade command is required due to upstream changes introduced in MDEV-14637 . scl enable rh-mariadb105 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password: scl enable rh-mariadb105 -- mysql_upgrade -p Note that when the rh-mariadb105*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mysql80 Software Collections. 5.2. Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections, unless the *-syspaths packages are installed (see below). It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. For instructions, see Migration to MySQL 5.7 . 5.2.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . 
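For reference, the setting in question (reproduced from the command listing at the end of this chapter) looks like this:

[mysqld]
default_authentication_plugin=caching_sha2_password

Restart the MySQL server after editing the file so that the new default takes effect for subsequently created accounts.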
The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mariadb105 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly; otherwise, an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of the --ssl-verify-server-cert option. Note that these options remain unchanged on the server side. The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 but in the future, it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated. The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.2.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running.
systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mariadb105 Software Collections. 5.3. Migrating to PostgreSQL 13 Red Hat Software Collections 3.7 is distributed with PostgreSQL 13 , available only for Red Hat Enterprise Linux 7. The rh-postgresql13 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust the SELinux policy. The rh-postgresql13 Software Collection includes the rh-postgresql13-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-postgresql13*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql13* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 13 , see the upstream compatibility notes for PostgreSQL 13 . If you are upgrading the PostgreSQL database in a container, see the container-specific instructions . The following table provides an overview of different paths in a Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql12 and rh-postgresql13 Software Collections. Table 5.1.
Differences in the PostgreSQL paths Content postgresql rh-postgresql12 rh-postgresql13 Executables /usr/bin/ /opt/rh/rh-postgresql12/root/usr/bin/ /opt/rh/rh-postgresql13/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql12/root/usr/lib64/ /opt/rh/rh-postgresql13/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql13/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql13/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql13/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql12/lib/pgsql/data/ /var/opt/rh/rh-postgresql13/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql12/lib/pgsql/backups/ /var/opt/rh/rh-postgresql13/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ /opt/rh/rh-postgresql13/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql13/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql12/root/usr/include/pgsql/ /opt/rh/rh-postgresql13/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ /opt/rh/rh-postgresql13/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql13/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.3.1. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 13 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql13 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it into the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 13, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql13/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql13/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 13 , this directory should not be present in your system.
If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql13/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql13 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql13/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql13-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql13-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql13 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql13 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql13-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql13 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.3.2. Migrating from the PostgreSQL 12 Software Collection to the PostgreSQL 13 Software Collection To migrate your data from the rh-postgresql12 Software Collection to the rh-postgresql13 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
Important Before migrating your data from PostgreSQL 12 to PostgreSQL 13 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql12/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql12-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql12-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql13/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql13/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 13 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql13/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql13 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql12-postgresql Alternatively, you can use the /opt/rh/rh-postgresql13/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql12-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql13-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql13-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql13 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old PostgreSQL 12 server, type the following command as root : chkconfig rh-postgresql12-postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise, only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql12-postgresql.service Dump all data in the PostgreSQL database into a script file.
As root , type: su - postgres -c 'scl enable rh-postgresql12 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql12-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql13 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql13-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql13 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old PostgreSQL 12 server, type the following command as root : chkconfig rh-postgresql12-postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise, only the postgres user will be allowed to access the database. 5.4. Migrating to nginx 1.18 The root directory for the rh-nginx118 Software Collection is located in /opt/rh/rh-nginx118/root/ . The error log is stored in /var/opt/rh/rh-nginx118/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx118/nginx/ directory. Configuration files in nginx 1.18 have the same syntax and largely the same format as in earlier nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx118/nginx/default.d/ directory are included in the default server block configuration for port 80 . Important Before upgrading from nginx 1.16 to nginx 1.18 , back up all your data, including web pages located in the /opt/rh/rh-nginx116/root/ tree and configuration files located in the /etc/opt/rh/rh-nginx116/nginx/ tree. If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/rh-nginx116/root/ tree, replicate those changes in the new /opt/rh/rh-nginx118/root/ and /etc/opt/rh/rh-nginx118/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.12 or nginx 1.14 to nginx 1.18 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . 5.5. Migrating to Redis 5 Redis 3.2 , provided by the rh-redis32 Software Collection, is mostly a strict subset of Redis 4.0 , which is mostly a strict subset of Redis 5.0 . Therefore, no major issues should occur when upgrading from version 3.2 to version 5.0. To upgrade a Redis Cluster to version 5.0, a mass restart of all the instances is needed. Compatibility Notes The format of RDB files has been changed. Redis 5 is able to read formats of all the earlier versions, but earlier versions are incapable of reading the Redis 5 format. Since version 4.0, the Redis Cluster bus protocol is no longer compatible with Redis 3.2 . For minor non-backward compatible changes, see the upstream release notes for version 4.0 and version 5.0 .
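After upgrading, you can confirm that the new server runs Redis 5 and has loaded the existing dataset. A minimal sketch, assuming the Redis 5 Software Collection is installed as rh-redis5 and the server is running on the default port:

# Confirm the server version reported by the upgraded instance
scl enable rh-redis5 -- redis-cli INFO server | grep redis_version
# Confirm the number of keys in the current database after the RDB file was loaded
scl enable rh-redis5 -- redis-cli DBSIZE

A non-zero DBSIZE (matching the key count of the old instance) indicates that the older-format RDB file was read successfully.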
[ "[mysqld] default_authentication_plugin=caching_sha2_password" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.7_release_notes/chap-Migration
Chapter 6. Enable cluster to auto-start after reboot
Chapter 6. Enable cluster to auto-start after reboot The cluster is not yet enabled to auto-start after reboot. The system administrator needs to start the cluster manually after a node is fenced and rebooted. After testing the previous section, when everything works fine, enable the cluster to auto-start after reboot: [root@s4node1]# pcs cluster enable --all Note : In some situations it can be beneficial not to have the cluster auto-start after a node has been rebooted. For example, if there is an issue with a filesystem that is required by a cluster resource, and the filesystem needs to be repaired first before it can be used again, auto-starting the cluster can fail because the filesystem does not work, which can cause even more trouble. Now rerun the tests in the previous section to make sure that the cluster still works fine. Note that in section 5.3 there is no need to run the pcs cluster start command after a node is rebooted; the cluster should start automatically after the reboot. At this point you have successfully configured a two-node cluster for ENSA2. You can either continue with intensive testing to get ready for production or optionally add more nodes to the cluster.
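To confirm the auto-start behavior after a test reboot, you can check the cluster state once the node is back up, for example:

[root@s4node1]# pcs status

All cluster resources should report as started without any manual pcs cluster start invocation.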
[ "pcs cluster enable --all" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/asmb_cco_auto_start_configuring-cost-optimized-sap
Chapter 20. Setting scheduler priorities
Chapter 20. Setting scheduler priorities The Red Hat Enterprise Linux for Real Time kernel allows fine-grained control of scheduler priorities. It also allows application-level programs to be scheduled at a higher priority than kernel threads. Warning Setting scheduler priorities can carry consequences and may cause the system to become unresponsive or behave unpredictably if crucial kernel processes are prevented from running as needed. Ultimately, the correct settings are workload-dependent. 20.1. Viewing thread scheduling priorities Thread priorities are set using a series of levels, ranging from 0 (lowest priority) to 99 (highest priority). The systemd service manager can be used to change the default priorities of threads after the kernel boots. Procedure To view scheduling priorities of running threads, use the tuna utility: 20.2. Changing the priority of services during booting Using systemd , you can set up real-time priority for services launched during the boot process. Unit configuration directives are used to change the priority of a service during the boot process. The boot process priority change is done by using the following directives in the [Service] section of /etc/systemd/system/ service .service.d/priority.conf : CPUSchedulingPolicy= Sets the CPU scheduling policy for executed processes. Takes one of the scheduling classes available on Linux: other batch idle fifo rr CPUSchedulingPriority= Sets the CPU scheduling priority for executed processes. The available priority range depends on the selected CPU scheduling policy. For real-time scheduling policies, an integer between 1 (lowest priority) and 99 (highest priority) can be used. Prerequisites You have administrator privileges. A service that runs on boot. Procedure For an existing service: Create a supplementary service configuration directory file for the service. Add the scheduling policy and priority to the file in the [Service] section. For example: Reload the systemd scripts configuration. Restart the service. Verification Display the service's priority. The output shows the configured priority of the service. For example: Additional resources Working with systemd unit files 20.3. Configuring the CPU usage of a service Using systemd , you can specify the CPUs on which services can run. Prerequisites You have administrator privileges. Procedure Create a supplementary service configuration directory file for the service. Add the CPUs to use for the service to the file using the CPUAffinity attribute in the [Service] section. For example: Reload the systemd scripts configuration. Restart the service. Verification Display the CPUs to which the specified service is limited. where service is the specified service. The following output shows that the mcelog service is limited to CPUs 0 and 1. 20.4. Priority map Scheduler priorities are defined in groups, with some groups dedicated to particular kernel functions. Table 20.1. Thread priority table Priority Threads Description 1 Low priority kernel threads This priority is usually reserved for tasks that need to be just above SCHED_OTHER . 2 - 49 Available for use The range used for typical application priorities. 50 Default hard-IRQ value This priority is the default value for hardware-based interrupts. 51 - 98 High priority threads Use this range for threads that execute periodically and must have quick response times. Do not use this range for CPU-bound threads, because it will prevent responses to lower-level interrupts.
99 Watchdogs and migration System threads that must run at the highest priority. 20.5. Additional resources Working with systemd unit files
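In addition to the tuna examples shown in the command listing below, the standard chrt utility can inspect or change the scheduling policy and priority of a running thread. A minimal sketch, reusing thread ID 826 from the mcelog example (the PID is illustrative):

# Show the current scheduling policy and priority of the thread
chrt -p 826
# Move the thread to SCHED_FIFO with priority 20
chrt -f -p 20 826

As with the systemd directives above, choose priorities according to the priority map so that the thread does not starve lower-priority kernel work.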
[ "tuna --show_threads thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 2 OTHER 0 0xfff 451 3 kthreadd 3 FIFO 1 0 46395 2 ksoftirqd/0 5 OTHER 0 0 11 1 kworker/0:0H 7 FIFO 99 0 9 1 posixcputmr/0 ...[output truncated]", "cat <<-EOF > /etc/systemd/system/mcelog.service.d/priority.conf", "[Service] CPUSchedulingPolicy=fifo CPUSchedulingPriority=20 EOF", "systemctl daemon-reload", "systemctl restart mcelog", "tuna -t mcelog -P", "thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 826 FIFO 20 0,1,2,3 13 0 mcelog", "md sscd", "[Service] CPUAffinity=0,1 EOF", "systemctl daemon-reload", "systemctl restart service", "tuna -t mcelog -P", "thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 12954 FIFO 20 0,1 2 1 mcelog" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_viewing-scheduling-priorities-of-running-threads_optimizing-rhel9-for-real-time-for-low-latency-operation
Eclipse Plugin Guide
Eclipse Plugin Guide Migration Toolkit for Runtimes 1.2 Identify and resolve migration issues by analyzing your applications with the MTR plugin for Eclipse. Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/eclipse_plugin_guide/index
Chapter 8. Setting up RHACS Cloud Service with Kubernetes secured clusters
Chapter 8. Setting up RHACS Cloud Service with Kubernetes secured clusters 8.1. Creating an RHACS Cloud Service instance for Kubernetes clusters Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters. 8.1.1. Creating an instance in the console In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters. Procedure To create an ACS instance : Log in to the Red Hat Hybrid Cloud Console. From the navigation menu, select Advanced Cluster Security ACS Instances . Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list: Name : Enter the name of your ACS instance . An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance. Cloud provider : The cloud provider where Central is located. Select AWS . Cloud region : The region for your cloud provider where Central is located. Select one of the following regions: US-East, N. Virginia Europe, Ireland Availability zones : Use the default value ( Multi ). Click Create instance . 8.1.2. steps On each Kubernetes cluster you want to secure, install secured cluster resources by using Helm charts or the roxctl CLI. 8.2. Generating an init bundle for Kubernetes secured clusters Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with the ACS Console. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources. 8.2.1. Generating an init bundle by using the RHACS portal You can create an init bundle containing secrets by using the RHACS portal. Note You must have the Admin user role to create an init bundle. Procedure Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method". Log in to the RHACS portal. If you do not have secured clusters, the Platform Configuration Clusters page appears. Click Create init bundle . Enter a name for the cluster init bundle. Select your platform. Select the installation method you will use for your secured clusters: Operator or Helm chart . Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method. Important Store this bundle securely because it contains secrets. Apply the init bundle by using it to create resources on the secured cluster. Install secured cluster services on each cluster. 8.2.2. Generating an init bundle by using the roxctl CLI You can create an init bundle with secrets by using the roxctl CLI. Note You must have the Admin user role to create init bundles. 
Prerequisites You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables: Set the ROX_API_TOKEN by running the following command: USD export ROX_API_TOKEN=<api_token> Set the ROX_CENTRAL_ADDRESS environment variable by running the following command: USD export ROX_CENTRAL_ADDRESS=<address>:<port_number> Important In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com . Procedure To generate a cluster init bundle containing secrets for Helm installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate --output \ <cluster_init_bundle_name> cluster_init_bundle.yaml To generate a cluster init bundle containing secrets for Operator installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate --output-secrets \ <cluster_init_bundle_name> cluster_init_bundle.yaml Important Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters. 8.2.3. steps Creating resources by using the init bundle 8.3. Applying an init bundle for Kubernetes secured clusters Apply the init bundle by using it to create resources. 8.3.1. Applying the init bundle on the secured cluster Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with RHACS Cloud Service. Note If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section. Prerequisites You must have generated an init bundle containing secrets. You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters. Procedure To create resources, perform only one of the following steps: Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create . When the command is complete, the display shows that the collector-tls , sensor-tls , and admission-control-tls` resources were created. Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources: USD oc create -f <init_bundle>.yaml \ 1 -n <stackrox> 2 1 Specify the file name of the init bundle containing the secrets. 2 Specify the name of the project where Central services are installed. Using the kubectl CLI, run the following commands to create the resources: USD kubectl create namespace stackrox 1 USD kubectl create -f <init_bundle>.yaml \ 2 -n <stackrox> 3 1 Create the project where secured cluster resources will be installed. This example uses stackrox . 2 Specify the file name of the init bundle containing the secrets. 
3 Specify the project name that you created. This example uses stackrox . Verification Restart Sensor to pick up the new certificates. For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section. 8.3.2. steps Install RHACS secured cluster services in all clusters that you want to monitor. 8.3.3. Additional resources Restarting the Sensor container 8.4. Installing secured cluster services from RHACS Cloud Service on Kubernetes clusters You can install RHACS Cloud Service on your secured clusters by using one of the following methods: By using Helm charts By using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it) 8.4.1. Installing RHACS Cloud Service on secured clusters by using Helm charts You can install RHACS on secured clusters by using Helm charts with no customization, by using Helm charts with the default values, or by using Helm charts with customizations of configuration parameters. First, ensure that you add the Helm chart repository. 8.4.1.1. Adding the Helm chart repository Procedure Add the RHACS charts repository. USD helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/ The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including: Secured Cluster Services Helm chart ( secured-cluster-services ) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim). Note Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor. Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 8.4.1.2. Installing RHACS Cloud Service on secured clusters by using Helm charts without customizations 8.4.1.2.1. Installing the secured-cluster-services Helm chart without customization Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim). Prerequisites You must have generated an RHACS init bundle for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the ACS instance you created. 8.4.1.3. Configuring the secured-cluster-services Helm chart with customizations Procedure This section describes Helm chart configuration parameters that you can use with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files. Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes: Public configuration file values-public.yaml : Use this file to save all non-sensitive configuration options. Private configuration file values-private.yaml : Use this file to save all sensitive configuration options. Ensure that you store this file securely. Important While using the secured-cluster-services Helm chart, do not modify the values.yaml file that is part of the chart. 8.4.1.3.1. 
Configuration parameters Parameter Description clusterName Name of your cluster. centralEndpoint Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss:// . When configuring multiple clusters, use the hostname for the address. For example, central.example.com . sensor.endpoint Address of the Sensor endpoint including port number. sensor.imagePullPolicy Image pull policy for the Sensor container. sensor.serviceTLS.cert The internal service-to-service TLS certificate that Sensor uses. sensor.serviceTLS.key The internal service-to-service TLS certificate key that Sensor uses. sensor.resources.requests.memory The memory request for the Sensor container. Use this parameter to override the default value. sensor.resources.requests.cpu The CPU request for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.memory The memory limit for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.cpu The CPU limit for the Sensor container. Use this parameter to override the default value. sensor.nodeSelector Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label. sensor.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. image.main.name The name of the main image. image.collector.name The name of the Collector image. image.main.registry The address of the registry you are using for the main image. image.collector.registry The address of the registry you are using for the Collector image. image.scanner.registry The address of the registry you are using for the Scanner image. image.scannerDb.registry The address of the registry you are using for the Scanner DB image. image.scannerV4.registry The address of the registry you are using for the Scanner V4 image. image.scannerV4DB.registry The address of the registry you are using for the Scanner V4 DB image. image.main.pullPolicy Image pull policy for main images. image.collector.pullPolicy Image pull policy for the Collector images. image.main.tag Tag of main image to use. image.collector.tag Tag of collector image to use. collector.collectionMethod Either CORE_BPF or NO_COLLECTION . collector.imagePullPolicy Image pull policy for the Collector container. collector.complianceImagePullPolicy Image pull policy for the Compliance container. collector.disableTaintTolerations If you specify false , tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you specify it as true , no tolerations are applied, and the collector pods are not scheduled onto nodes with taints. collector.resources.requests.memory The memory request for the Collector container. Use this parameter to override the default value. collector.resources.requests.cpu The CPU request for the Collector container. Use this parameter to override the default value. collector.resources.limits.memory The memory limit for the Collector container. Use this parameter to override the default value. collector.resources.limits.cpu The CPU limit for the Collector container. Use this parameter to override the default value. collector.complianceResources.requests.memory The memory request for the Compliance container. Use this parameter to override the default value. 
collector.complianceResources.requests.cpu The CPU request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.memory The memory limit for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.cpu The CPU limit for the Compliance container. Use this parameter to override the default value. collector.serviceTLS.cert The internal service-to-service TLS certificate that Collector uses. collector.serviceTLS.key The internal service-to-service TLS certificate key that Collector uses. admissionControl.listenOnCreates This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for workload creation events. admissionControl.listenOnUpdates When you set this parameter as false , Red Hat Advanced Cluster Security for Kubernetes creates the ValidatingWebhookConfiguration in a way that causes the Kubernetes API server not to send object update events. Since the volume of object updates is usually higher than the object creates, leaving this as false limits the load on the admission control service and decreases the chances of a malfunctioning admission control service. admissionControl.listenOnEvents This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for Kubernetes exec and portforward events. RHACS does not support this feature on OpenShift Container Platform 3.11. admissionControl.dynamic.enforceOnCreates This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. admissionControl.dynamic.enforceOnUpdates This setting controls the behavior of the admission control service. You must specify listenOnUpdates as true for this to work. admissionControl.dynamic.scanInline If you set this option to true , the admission control service requests an image scan before making an admission decision. Since image scans take several seconds, enable this option only if you can ensure that all images used in your cluster are scanned before deployment (for example, by a CI integration during image build). This option corresponds to the Contact image scanners option in the RHACS portal. admissionControl.dynamic.disableBypass Set it to true to disable bypassing the Admission controller. admissionControl.dynamic.timeout Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration . admissionControl.resources.requests.memory The memory request for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.requests.cpu The CPU request for the Admission Control container. 
Use this parameter to override the default value. admissionControl.resources.limits.memory The memory limit for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.cpu The CPU limit for the Admission Control container. Use this parameter to override the default value. admissionControl.nodeSelector Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label. admissionControl.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. admissionControl.serviceTLS.cert The internal service-to-service TLS certificate that Admission Control uses. admissionControl.serviceTLS.key The internal service-to-service TLS certificate key that Admission Control uses. registryOverride Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry. collector.disableTaintTolerations If you specify false , tolerations are applied to Collector, and the Collector pods can schedule onto all nodes with taints. If you specify it as true , no tolerations are applied, and the Collector pods are not scheduled onto nodes with taints. createUpgraderServiceAccount Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions. createSecrets Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller. collector.slimMode Deprecated. Specify true if you want to use a slim Collector image for deploying Collector. sensor.resources Resource specification for Sensor. admissionControl.resources Resource specification for Admission controller. collector.resources Resource specification for Collector. collector.complianceResources Resource specification for Collector's Compliance container. exposeMonitoring If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller. auditLogs.disableCollection If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets. scanner.disable If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true . scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.replicas Resource specification for Collector's Compliance container. scanner.logLevel Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. 
scanner.autoscaling.disable If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment. scanner.autoscaling.minReplicas The minimum number of replicas for autoscaling. Defaults to 2. scanner.autoscaling.maxReplicas The maximum number of replicas for autoscaling. Defaults to 5. scanner.nodeSelector Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label. scanner.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. scanner.dbNodeSelector Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label. scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.resources.requests.memory The memory request for the Scanner container. Use this parameter to override the default value. scanner.resources.requests.cpu The CPU request for the Scanner container. Use this parameter to override the default value. scanner.resources.limits.memory The memory limit for the Scanner container. Use this parameter to override the default value. scanner.resources.limits.cpu The CPU limit for the Scanner container. Use this parameter to override the default value. scanner.dbResources.requests.memory The memory request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.requests.cpu The CPU request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.memory The memory limit for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.cpu The CPU limit for the Scanner DB container. Use this parameter to override the default value. monitoring.openshift.enabled If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4. network.enableNetworkPolicies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to False . This is a Boolean value. The default value is True , which means the default policies are automatically created. Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. 8.4.1.3.1.1. Environment variables You can specify environment variables for Sensor and Admission controller in the following format: customize: envVars: ENV_VAR1: "value1" ENV_VAR2: "value2" The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads. The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment). 8.4.1.3.2. 
Installing the secured-cluster-services Helm chart with customizations After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components: Sensor Admission controller Collector Scanner: optional for secured clusters when the StackRox Scanner is installed Scanner DB: optional for secured clusters when the StackRox Scanner is installed Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed Prerequisites You must have generated an RHACS init bundle for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. Procedure Run the following command: USD helm install -n stackrox \ --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <name_of_cluster_init_bundle.yaml> \ -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \ 1 --set imagePullSecrets.username=<username> \ 2 --set imagePullSecrets.password=<password> 3 1 Use the -f option to specify the paths for your YAML configuration files. 2 Include the user name for your pull secret for Red Hat Container Registry authentication. 3 Include the password for your pull secret for Red Hat Container Registry authentication. Note To deploy secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command: USD helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET") 1 1 If you are using base64 encoded variables, use the helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead. 8.4.1.4. Changing configuration options after deploying the secured-cluster-services Helm chart You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart. When using the helm upgrade command to make changes, the following guidelines and requirements apply: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes. If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values. If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command. Procedure Update the values-public.yaml and values-private.yaml configuration files with new values. 
Run the helm upgrade command and specify the configuration files using the -f option: USD helm upgrade -n stackrox \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ --reuse-values \ 1 -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml> 1 If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter. 8.4.2. Installing RHACS on secured clusters by using the roxctl CLI To install RHACS on secured clusters by using the CLI, perform the following steps: Install the roxctl CLI. Install Sensor. 8.4.2.1. Installing the roxctl CLI You must first download the binary. You can install roxctl on Linux, Windows, or macOS. 8.4.2.1.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 8.4.2.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 8.4.2.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 8.4.2.2. Installing Sensor To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method. To perform an installation by using the manifest installation method, follow only one of the following procedures: Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script. Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance. Prerequisites You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service). 8.4.2.2.1. 
Manifest installation method by using the web portal Procedure On your secured cluster, in the RHACS portal, go to Platform Configuration Clusters . Select Secure a cluster Legacy installation method . Specify a name for the cluster. Provide appropriate values for the fields based on where you are deploying the Sensor. Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. Click to continue with the Sensor setup. Click Download YAML File and Keys to download the cluster bundle (zip archive). Important The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. 8.4.2.2.2. Manifest installation by using the roxctl CLI Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. Verification Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration Clusters , the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems: On Kubernetes, enter the following command: USD kubectl get pod -n stackrox -w Click Finish to close the window. After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor. 8.5. Verifying installation of secured clusters After installing RHACS Cloud Service, you can perform some steps to verify that the installation was successful. To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations. If no data appears in the ACS Console: Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. 
For more information, see instructions for installing by using Helm charts or by using the roxctl CLI . Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful. Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value should be the same value as shown in the ACS instance details in the Red Hat Hybrid Cloud Console.
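As a final command-line check, you can confirm that the secured cluster services are running and that Sensor has reached Central. The following sketch assumes the stackrox namespace used throughout this chapter; the grep pattern is only a convenience for narrowing the log output:
# Confirm that the secured cluster workloads are up (assumes the stackrox namespace)
kubectl get pods -n stackrox
# Inspect the Sensor logs for evidence of a successful connection to Central
kubectl logs deployment/sensor -n stackrox | grep -i central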
[ "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml", "oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2", "kubectl create namespace stackrox 1 kubectl create -f <init_bundle>.yaml \\ 2 -n <stackrox> 3", "helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/", "helm search repo -l rhacs/", "customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3", "helm install ... -f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1", "helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe", "roxctl version", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "kubectl get pod -n stackrox -w" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/rhacs_cloud_service/setting-up-rhacs-cloud-service-with-kubernetes-secured-clusters
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/bare_metal_provisioning/making-open-source-more-inclusive
Chapter 3. Evaluating the model
Chapter 3. Evaluating the model If you want to measure the improvements of your new model, you can compare its performance to the base model with the evaluation process. You can also chat with the model directly to qualitatively identify whether the new model has learned the knowledge you created. If you want more quantitative results of the model improvements, you can run the evaluation process in the RHEL AI CLI. 3.1. Evaluating your new model You can run the evaluation process in the RHEL AI CLI with the following procedure. Prerequisites You installed RHEL AI with the bootable container image. You created a custom qna.yaml file with skills or knowledge. You ran the synthetic data generation process. You trained the model using the RHEL AI training process. You downloaded the prometheus-8x7b-v2-0 judge model. You have root user access on your machine. Procedure Navigate to your working Git branch where you created your qna.yaml file. You can now run the evaluation process on different benchmarks. Each command needs the path to the trained samples model to evaluate, you can access these checkpoints in your ~/.local/share/instructlab/checkpoints folder. MMLU_BRANCH benchmark - If you want to measure how your knowledge contributions have impacted your model, run the mmlu_branch benchmark by executing the following command: USD ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> \ --tasks-dir ~/.local/share/instructlab/datasets/<node-dataset> \ --base-model ~/.cache/instructlab/models/granite-7b-starter where <checkpoint> Specify the best scored checkpoint file generated during multi-phase training <node-dataset> Specify the node_datasets directory, in the ~/.local/share/instructlab/datasets/ directory, with the same timestamps as the.jsonl files used for training the model. Example output # KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04) MT_BENCH_BRANCH benchmark - If you want to measure how your skills contributions have impacted your model, run the mt_bench_branch benchmark by executing the following command: USD ilab model evaluate \ --benchmark mt_bench_branch \ --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> \ --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 \ --branch <worker-branch> \ --base-branch <worker-branch> where <checkpoint> Specify the best scored checkpoint file generated during multi-phase training. <worker-branch> Specify the branch you used when adding data to your taxonomy tree. <num-gpus> Specify the number of GPUs you want to use for evaluation. Example output # SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. 
foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5) Optional: You can manually evaluate each checkpoint using the MMLU and MT_BENCH benchmarks. You can evaluate any model against the standardized set of knowledge or skills, allowing you to compare the scores of your own model against other LLMs. MMLU - If you want to see the evaluation score of your new model against a standardized set of knowledge data, set the mmlu benchmark by running the following command: USD ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 where <checkpoint> Specify one of the checkpoint files generated during multi-phase training. Example output # KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46 ... MT_BENCH - If you want to see the evaluation score of your new model against a standardized set of skills, set the mt_bench benchmark by running the following command: USD ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665 where <checkpoint> Specify one of the checkpoint files generated during multi-phase training. Example output # SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05
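The benchmarks above are run against one checkpoint at a time. If you want to score every checkpoint from a training run before picking the best one, a small shell loop over the checkpoint directory is enough. This is a convenience sketch, not part of the documented procedure, and it assumes the default checkpoint location used throughout this chapter:
# Score each phase-2 checkpoint on the MMLU benchmark in turn
for ckpt in ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_*; do
    echo "=== $ckpt ==="
    ilab model evaluate --benchmark mmlu --model "$ckpt"
done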
[ "ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --tasks-dir ~/.local/share/instructlab/datasets/<node-dataset> --base-model ~/.cache/instructlab/models/granite-7b-starter", "KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04)", "ilab model evaluate --benchmark mt_bench_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 --branch <worker-branch> --base-branch <worker-branch>", "SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5)", "ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665", "KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46", "ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665", "SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/generating_a_custom_llm_using_rhel_ai/evaluating_model
6.7. Insufficient Free Extents for a Logical Volume
6.7. Insufficient Free Extents for a Logical Volume You may get the error message "Insufficient free extents" when creating a logical volume even though you think you have enough extents, based on the output of the vgdisplay or vgs commands. This is because these commands round figures to 2 decimal places to provide human-readable output. To specify an exact size, use the free physical extent count instead of a byte value to determine the size of the logical volume. By default, the vgdisplay command includes this line of output that indicates the free physical extents. Alternately, you can use the vg_free_count and vg_extent_count arguments of the vgs command to display the free extents and the total number of extents. With 8780 free physical extents, you can run the following command, using the lower-case l argument to use extents instead of bytes: This uses all the free extents in the volume group. Alternately, you can use a percentage of the remaining free space in the volume group when creating the logical volume by using the -l argument of the lvcreate command; see the sketch below. For more information, see Section 4.4.1.1, "Creating Linear Volumes" .
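For example, instead of counting extents yourself, you can hand all of the remaining space to the new volume with the percentage form of the -l argument. This sketch reuses the testvg volume group from the output above:
# Create a logical volume that consumes 100% of the free space in testvg
lvcreate -l 100%FREE -n testlv testvg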
[ "vgdisplay --- Volume group --- Free PE / Size 8780 / 34.30 GB", "vgs -o +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext testvg 2 0 0 wz--n- 34.30G 34.30G 8780 8780", "lvcreate -l8780 -n testlv testvg", "vgs -o +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext testvg 2 1 0 wz--n- 34.30G 0 0 8780" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/nofreeext
4.7. Authentication
4.7. Authentication SSSD currently does not support eDirectory account lockout policies. Additionally, when installing a replica (using the ipa-replica-install command), GSSAPI errors similar to the following might be returned: These messages can be safely ignored.
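If you want to confirm that no other bind failures are present, you can search the Directory Server error log on the replica. The instance name in the path below is an assumption based on the example realm shown in the error messages; substitute the name of your own instance:
# Search for GSSAPI-related entries in the 389 Directory Server error log
# (the slapd-TESTRELM instance name is an assumption)
grep GSSAPI /var/log/dirsrv/slapd-TESTRELM/errors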
[ "[07/Apr/2011:10:46:23 -0400] slapi_ldap_bind - Error: could not perform interactive bind for id [] mech [GSSAPI]: error -2 (Local error) [07/Apr/2011:10:46:23 -0400] NSMMReplicationPlugin - agmt=\"cn=meToipaqa64vmb.testrelm\" (ipaqa64vmb:389): Replication bind with GSSAPI auth failed: LDAP error -2 (Local error) (SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Credentials cache file '/tmp/krb5cc_496' not found))" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/ar01s04s07
4.3. CONTROL/MONITORING
4.3. CONTROL/MONITORING The CONTROL/MONITORING Panel presents a limited runtime status of LVS. It displays the status of the pulse daemon, the LVS routing table, and the LVS-spawned nanny processes. Note The fields for CURRENT LVS ROUTING TABLE and CURRENT LVS PROCESSES remain blank until you actually start LVS, as shown in Section 4.8, "Starting LVS" . Figure 4.2. The CONTROL/MONITORING Panel Auto update The status display on this page can be updated automatically at a user-configurable interval. To enable this feature, click on the Auto update checkbox and set the desired update frequency in the Update frequency in seconds text box (the default value is 10 seconds). It is not recommended that you set the automatic update to an interval of less than 10 seconds. Doing so may make it difficult to reconfigure the Auto update interval because the page will update too frequently. If you encounter this issue, simply click on another panel and then back on CONTROL/MONITORING . The Auto update feature does not work with all browsers, such as Mozilla . Update information now You can update the status information manually by clicking this button. CHANGE PASSWORD Clicking this button takes you to a help screen with information on how to change the administrative password for the Piranha Configuration Tool .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-piranha-ctrlmon-VSA
15.2. BIND
15.2. BIND This section covers BIND (Berkeley Internet Name Domain), the DNS server included in Red Hat Enterprise Linux. It focuses on the structure of its configuration files, and describes how to administer it both locally and remotely. 15.2.1. Empty Zones BIND configures a number of " empty zones " to prevent recursive servers from sending unnecessary queries to Internet servers that cannot handle them (thus creating delays and SERVFAIL responses to clients who query for them). These empty zones ensure that immediate and authoritative NXDOMAIN responses are returned instead. The configuration option empty-zones-enable controls whether or not empty zones are created, whilst the option disable-empty-zone can be used in addition to disable one or more empty zones from the list of default prefixes that would be used. The number of empty zones created for RFC 1918 prefixes has been increased, and users of BIND 9.9 and above will see the RFC 1918 empty zones both when empty-zones-enable is unspecified (defaults to yes ), and when it is explicitly set to yes . 15.2.2. Configuring the named Service When the named service is started, it reads the configuration from the files as described in Table 15.1, "The named Service Configuration Files" . Table 15.1. The named Service Configuration Files Path Description /etc/named.conf The main configuration file. /etc/named/ An auxiliary directory for configuration files that are included in the main configuration file. The configuration file consists of a collection of statements with nested options surrounded by opening and closing curly brackets ( { and } ). Note that when editing the file, you have to be careful not to make any syntax error, otherwise the named service will not start. A typical /etc/named.conf file is organized as follows: Note If you have installed the bind-chroot package, the BIND service will run in the chroot environment. In that case, the initialization script will mount the above configuration files using the mount --bind command, so that you can manage the configuration outside this environment. There is no need to copy anything into the /var/named/chroot/ directory because it is mounted automatically. This simplifies maintenance since you do not need to take any special care of BIND configuration files if it is run in a chroot environment. You can organize everything as you would with BIND not running in a chroot environment. The following directories are automatically mounted into the /var/named/chroot/ directory if the corresponding mount point directories underneath /var/named/chroot/ are empty: /etc/named /etc/pki/dnssec-keys /run/named /var/named /usr/lib64/bind or /usr/lib/bind (architecture dependent). The following files are also mounted if the target file does not exist in /var/named/chroot/ : /etc/named.conf /etc/rndc.conf /etc/rndc.key /etc/named.rfc1912.zones /etc/named.dnssec.keys /etc/named.iscdlv.key /etc/named.root.key Important Editing files which have been mounted in a chroot environment requires creating a backup copy and then editing the original file. Alternatively, use an editor with " edit-a-copy " mode disabled. For example, to edit the BIND's configuration file, /etc/named.conf , with Vim while it is running in a chroot environment, issue the following command as root : 15.2.2.1. 
Installing BIND in a chroot Environment To install BIND to run in a chroot environment, issue the following command as root : To enable the named-chroot service, first check if the named service is running by issuing the following command: If it is running, it must be disabled. To disable named , issue the following commands as root : Then, to enable the named-chroot service, issue the following commands as root : To check the status of the named-chroot service, issue the following command as root : 15.2.2.2. Common Statement Types The following types of statements are commonly used in /etc/named.conf : acl The acl (Access Control List) statement allows you to define groups of hosts, so that they can be permitted or denied access to the nameserver. It takes the following form: The acl-name statement name is the name of the access control list, and the match-element option is usually an individual IP address (such as 10.0.1.1 ) or a Classless Inter-Domain Routing ( CIDR ) network notation (for example, 10.0.1.0/24 ). For a list of already defined keywords, see Table 15.2, "Predefined Access Control Lists" . Table 15.2. Predefined Access Control Lists Keyword Description any Matches every IP address. localhost Matches any IP address that is in use by the local system. localnets Matches any IP address on any network to which the local system is connected. none Does not match any IP address. The acl statement can be especially useful in conjunction with other statements such as options . Example 15.2, "Using acl in Conjunction with Options" defines two access control lists, black-hats and red-hats , and adds black-hats on the blacklist while granting red-hats normal access. Example 15.2. Using acl in Conjunction with Options include The include statement allows you to include files in the /etc/named.conf , so that potentially sensitive data can be placed in a separate file with restricted permissions. It takes the following form: The file-name statement name is an absolute path to a file. Example 15.3. Including a File to /etc/named.conf options The options statement allows you to define global server configuration options as well as to set defaults for other statements. It can be used to specify the location of the named working directory, the types of queries allowed, and much more. It takes the following form: For a list of frequently used option directives, see Table 15.3, "Commonly Used Configuration Options" below. Table 15.3. Commonly Used Configuration Options Option Description allow-query Specifies which hosts are allowed to query the nameserver for authoritative resource records. It accepts an access control list, a collection of IP addresses, or networks in the CIDR notation. All hosts are allowed by default. allow-query-cache Specifies which hosts are allowed to query the nameserver for non-authoritative data such as recursive queries. Only localhost and localnets are allowed by default. blackhole Specifies which hosts are not allowed to query the nameserver. This option should be used when a particular host or network floods the server with requests. The default option is none . directory Specifies a working directory for the named service. The default option is /var/named/ . disable-empty-zone Used to disable one or more empty zones from the list of default prefixes that would be used. Can be specified in the options statement and also in view statements. It can be used multiple times. dnssec-enable Specifies whether to return DNSSEC related resource records. 
The default option is yes . dnssec-validation Specifies whether to prove that resource records are authentic through DNSSEC. The default option is yes . empty-zones-enable Controls whether or not empty zones are created. Can be specified only in the options statement. forwarders Specifies a list of valid IP addresses for nameservers to which the requests should be forwarded for resolution. forward Specifies the behavior of the forwarders directive. It accepts the following options: first - The server will query the nameservers listed in the forwarders directive before attempting to resolve the name on its own. only - When unable to query the nameservers listed in the forwarders directive, the server will not attempt to resolve the name on its own. listen-on Specifies the IPv4 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv4 interfaces are used by default. listen-on-v6 Specifies the IPv6 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv6 interfaces are used by default. max-cache-size Specifies the maximum amount of memory to be used for server caches. When the limit is reached, the server causes records to expire prematurely so that the limit is not exceeded. In a server with multiple views, the limit applies separately to the cache of each view. The default option is 32M . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify the primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. pid-file Specifies the location of the process ID file created by the named service. recursion Specifies whether to act as a recursive server. The default option is yes . statistics-file Specifies an alternate location for statistics files. The /var/named/named.stats file is used by default. Note The directory used by named for runtime data has been moved from the BIND default location, /var/run/named/ , to a new location /run/named/ . As a result, the PID file has been moved from the default location /var/run/named/named.pid to the new location /run/named/named.pid . In addition, the session-key file has been moved to /run/named/session.key . These locations need to be specified by statements in the options section. See Example 15.4, "Using the options Statement" . Important To prevent distributed denial of service (DDoS) attacks, it is recommended that you use the allow-query-cache option to restrict recursive DNS services for a particular subset of clients only. See the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" , and the named.conf manual page for a complete list of available options. Example 15.4. Using the options Statement zone The zone statement allows you to define the characteristics of a zone, such as the location of its configuration file and zone-specific options, and can be used to override the global options statements. 
It takes the following form: The zone-name attribute is the name of the zone, zone-class is the optional class of the zone, and option is a zone statement option as described in Table 15.4, "Commonly Used Options in Zone Statements" . The zone-name attribute is particularly important, as it is the default value assigned for the $ORIGIN directive used within the corresponding zone file located in the /var/named/ directory. The named daemon appends the name of the zone to any non-fully qualified domain name listed in the zone file. For example, if a zone statement defines the namespace for example.com , use example.com as the zone-name so that it is placed at the end of host names within the example.com zone file. For more information about zone files, see Section 15.2.3, "Editing Zone Files" . Table 15.4. Commonly Used Options in Zone Statements Option Description allow-query Specifies which clients are allowed to request information about this zone. This option overrides the global allow-query option. All query requests are allowed by default. allow-transfer Specifies which secondary servers are allowed to request a transfer of the zone's information. All transfer requests are allowed by default. allow-update Specifies which hosts are allowed to dynamically update information in their zone. The default option is to deny all dynamic update requests. Note that you should be careful when allowing hosts to update information about their zone. Do not set IP addresses in this option unless the server is in the trusted network. Instead, use a TSIG key as described in Section 15.2.6.3, "Transaction SIGnatures (TSIG)" . file Specifies the name of the file in the named working directory that contains the zone's configuration data. masters Specifies from which IP addresses to request authoritative zone information. This option is used only if the zone is defined as type slave . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify the primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. type Specifies the zone type. It accepts the following options: delegation-only - Enforces the delegation status of infrastructure zones such as COM, NET, or ORG. Any answer that is received without an explicit or implicit delegation is treated as NXDOMAIN . This option is only applicable in TLDs (Top-Level Domain) or root zone files used in recursive or caching implementations. forward - Forwards all requests for information about this zone to other nameservers. hint - A special type of zone used to point to the root nameservers which resolve queries when a zone is not otherwise known. No configuration beyond the default is necessary with a hint zone. master - Designates the nameserver as authoritative for this zone. A zone should be set as the master if the zone's configuration files reside on the system. slave - Designates the nameserver as a secondary server for this zone. The primary server is specified in the masters directive. Most changes to the /etc/named.conf file of a primary or secondary nameserver involve adding, modifying, or deleting zone statements, and only a small subset of zone statement options is usually needed for a nameserver to work efficiently. 
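To make the relationship between the zone-name attribute and the $ORIGIN directive concrete, consider the following sketch; the zone name, file name, and address are illustrative assumptions rather than values taken from this guide:

zone "example.org" IN {
    type master;
    file "example.org.zone";
    allow-transfer { 192.0.2.2; };
};

With this statement in place, an unqualified record such as www IN A 192.0.2.80 in the example.org.zone file is expanded to www.example.org. , exactly as if $ORIGIN example.org. had been declared at the top of the zone file.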
In Example 15.5, "A Zone Statement for a Primary nameserver" , the zone is identified as example.com , the type is set to master , and the named service is instructed to read the /var/named/example.com.zone file. It also allows only a secondary nameserver ( 192.168.0.2 ) to transfer the zone. Example 15.5. A Zone Statement for a Primary nameserver A secondary server's zone statement is slightly different. The type is set to slave , and the masters directive tells named the IP address of the primary server. In Example 15.6, "A Zone Statement for a Secondary nameserver" , the named service is configured to query the primary server at the 192.168.0.1 IP address for information about the example.com zone. The received information is then saved to the /var/named/slaves/example.com.zone file. Note that you have to put all secondary zones in the /var/named/slaves/ directory; otherwise, the service will fail to transfer the zone. Example 15.6. A Zone Statement for a Secondary nameserver 15.2.2.3. Other Statement Types The following types of statements are less commonly used in /etc/named.conf : controls The controls statement allows you to configure various security requirements necessary to use the rndc command to administer the named service. See Section 15.2.4, "Using the rndc Utility" for more information on the rndc utility and its usage. key The key statement allows you to define a particular key by name. Keys are used to authenticate various actions, such as secure updates or the use of the rndc command. Two options are used with key : algorithm algorithm-name - The type of algorithm to be used (for example, hmac-md5 ). secret " key-value " - The encrypted key. See Section 15.2.4, "Using the rndc Utility" for more information on the rndc utility and its usage. logging The logging statement allows you to use multiple types of logs, so-called channels . By using the channel option within the statement, you can construct a customized type of log with its own file name ( file ), size limit ( size ), version number ( version ), and level of importance ( severity ). Once a customized channel is defined, a category option is used to categorize the channel and begin logging when the named service is restarted. By default, named sends standard messages to the rsyslog daemon, which places them in /var/log/messages . Several standard channels are built into BIND with various severity levels, such as default_syslog (which handles informational logging messages) and default_debug (which specifically handles debugging messages). A default category, called default , uses the built-in channels to do normal logging without any special configuration. Customizing the logging process can be a very detailed process and is beyond the scope of this chapter. For information on creating custom BIND logs, see the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" . server The server statement allows you to specify options that affect how the named service should respond to remote nameservers, especially with regard to notifications and zone transfers. The transfer-format option controls the number of resource records that are sent with each message. It can be either one-answer (only one resource record), or many-answers (multiple resource records). Note that while the many-answers option is more efficient, it is not supported by older versions of BIND. trusted-keys The trusted-keys statement allows you to specify assorted public keys used for secure DNS (DNSSEC). 
See Section 15.2.6.4, "DNS Security Extensions (DNSSEC)" for more information on this topic. view The view statement allows you to create special views depending upon which network the host querying the nameserver is on. This allows some hosts to receive one answer regarding a zone while other hosts receive totally different information. Alternatively, certain zones may only be made available to particular trusted hosts while non-trusted hosts can only make queries for other zones. Multiple views can be used as long as their names are unique. The match-clients option allows you to specify the IP addresses that apply to a particular view. If the options statement is used within a view, it overrides the already configured global options. Finally, most view statements contain multiple zone statements that apply to the match-clients list. Note that the order in which the view statements are listed is important, as the first statement that matches a particular client's IP address is used. For more information on this topic, see Section 15.2.6.1, "Multiple Views" . 15.2.2.4. Comment Tags In addition to statements, the /etc/named.conf file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to a user. The following are valid comment tags: // Any text after the // characters to the end of the line is considered a comment. For example: # Any text after the # character to the end of the line is considered a comment. For example: /* and */ Any block of text enclosed in /* and */ is considered a comment. For example: 15.2.3. Editing Zone Files As outlined in Section 15.1.1, "Name server Zones" , zone files contain information about a namespace. They are stored in the named working directory located in /var/named/ by default. Each zone file is named according to the file option in the zone statement, usually in a way that relates to the domain in question and identifies the file as containing zone data, such as example.com.zone . Table 15.5. The named Service Zone Files Path Description /var/named/ The working directory for the named service. The nameserver is not allowed to write to this directory. /var/named/slaves/ The directory for secondary zones. This directory is writable by the named service. /var/named/dynamic/ The directory for other files, such as dynamic DNS (DDNS) zones or managed DNSSEC keys. This directory is writable by the named service. /var/named/data/ The directory for various statistics and debugging files. This directory is writable by the named service. A zone file consists of directives and resource records. Directives tell the nameserver to perform tasks or apply special settings to the zone, while resource records define the parameters of the zone and assign identities to individual hosts. While the directives are optional, the resource records are required in order to provide name service to a zone. All directives and resource records should be entered on individual lines. 15.2.3.1. Common Directives Directives begin with the dollar sign character ( $ ) followed by the name of the directive, and usually appear at the top of the file. The following directives are commonly used in zone files: $INCLUDE The $INCLUDE directive allows you to include another file at the place where it appears, so that other zone settings can be stored in a separate zone file. Example 15.7. 
Using the $INCLUDE Directive $ORIGIN The $ORIGIN directive allows you to append the domain name to unqualified records, such as those with the host name only. Note that the use of this directive is not necessary if the zone is specified in /etc/named.conf , since the zone name is used by default. In Example 15.8, "Using the $ORIGIN Directive" , any names used in resource records that do not end in a trailing period (the . character) are appended with example.com . Example 15.8. Using the $ORIGIN Directive $TTL The $TTL directive allows you to set the default Time to Live (TTL) value for the zone, that is, how long a zone record is valid. Each resource record can contain its own TTL value, which overrides this directive. Increasing this value allows remote nameservers to cache the zone information for a longer period of time, reducing the number of queries for the zone and lengthening the amount of time required to propagate resource record changes. Example 15.9. Using the $TTL Directive 15.2.3.2. Common Resource Records The following resource records are commonly used in zone files: A The Address record specifies an IP address to be assigned to a name. It takes the following form: If the hostname value is omitted, the record will point to the last specified hostname . In Example 15.10, "Using the A Resource Record" , the requests for server1.example.com are pointed to 10.0.1.3 or 10.0.1.5 . Example 15.10. Using the A Resource Record CNAME The Canonical Name record maps one name to another. Because of this, this type of record is sometimes referred to as an alias record . It takes the following form: CNAME records are most commonly used to point to services that use a common naming scheme, such as www for Web servers. However, there are multiple restrictions for their usage: CNAME records should not point to other CNAME records. This is mainly to avoid possible infinite loops. CNAME records should not contain other resource record types (such as A, NS, MX, and so on). The only exceptions are DNSSEC-related records (RRSIG, NSEC, and so on) when the zone is signed. Other resource records that point to the fully qualified domain name (FQDN) of a host (NS, MX, PTR) should not point to a CNAME record. In Example 15.11, "Using the CNAME Resource Record" , the A record binds a host name to an IP address, while the CNAME record points the commonly used www host name to it. Example 15.11. Using the CNAME Resource Record MX The Mail Exchange record specifies where the mail sent to a particular namespace controlled by this zone should go. It takes the following form: The email-server-name is a fully qualified domain name (FQDN). The preference-value allows numerical ranking of the email servers for a namespace, giving preference to some email systems over others. The MX resource record with the lowest preference-value is preferred over the others. However, multiple email servers can possess the same value to distribute email traffic evenly among them. In Example 15.12, "Using the MX Resource Record" , the first mail.example.com email server is preferred to the mail2.example.com email server when receiving email destined for the example.com domain. Example 15.12. Using the MX Resource Record NS The Nameserver record announces authoritative nameservers for a particular zone. It takes the following form: The nameserver-name should be a fully qualified domain name (FQDN). 
Note that when two nameservers are listed as authoritative for the domain, it is not important whether these nameservers are secondary nameservers, or if one of them is a primary server. They are both still considered authoritative. Example 15.13. Using the NS Resource Record PTR The Pointer record points to another part of the namespace. It takes the following form: The last-IP-digit directive is the last number in an IP address, and the FQDN-of-system is a fully qualified domain name (FQDN). PTR records are primarily used for reverse name resolution, as they point IP addresses back to a particular name. See Section 15.2.3.4.2, "A Reverse Name Resolution Zone File" for examples of PTR records in use. SOA The Start of Authority record announces important authoritative information about a namespace to the nameserver. Located after the directives, it is the first resource record in a zone file. It takes the following form: The directives are as follows: The @ symbol places the $ORIGIN directive (or the zone's name if the $ORIGIN directive is not set) as the namespace being defined by this SOA resource record. The primary-name-server directive is the host name of the primary nameserver that is authoritative for this domain. The hostmaster-email directive is the email of the person to contact about the namespace. The serial-number directive is a numerical value incremented every time the zone file is altered to indicate it is time for the named service to reload the zone. The time-to-refresh directive is the numerical value secondary nameservers use to determine how long to wait before asking the primary nameserver if any changes have been made to the zone. The time-to-retry directive is a numerical value used by secondary nameservers to determine the length of time to wait before issuing a refresh request in the event that the primary nameserver is not answering. If the primary server has not replied to a refresh request before the amount of time specified in the time-to-expire directive elapses, the secondary servers stop responding as an authority for requests concerning that namespace. In BIND 4 and 8, the minimum-TTL directive is the amount of time other nameservers cache the zone's information. In BIND 9, it defines how long negative answers are cached for. Caching of negative answers can be set to a maximum of 3 hours ( 3H ). When configuring BIND, all times are specified in seconds. However, it is possible to use abbreviations when specifying units of time other than seconds, such as minutes ( M ), hours ( H ), days ( D ), and weeks ( W ). Table 15.6, "Seconds compared to other time units" shows an amount of time in seconds and the equivalent time in another format. Table 15.6. Seconds compared to other time units Seconds Other Time Units 60 1M 1800 30M 3600 1H 10800 3H 21600 6H 43200 12H 86400 1D 259200 3D 604800 1W 31536000 365D Example 15.14. Using the SOA Resource Record 15.2.3.3. Comment Tags In addition to resource records and directives, a zone file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to the user. Any text after the semicolon character to the end of the line is considered a comment. For example: 15.2.3.4. Example Usage The following examples show the basic usage of zone files. 15.2.3.4.1. A Simple Zone File Example 15.15, "A simple zone file" demonstrates the use of standard directives and SOA values. Example 15.15. 
A simple zone file In this example, the authoritative nameservers are set as dns1.example.com and dns2.example.com , and are tied to the 10.0.1.1 and 10.0.1.2 IP addresses respectively using the A record. The email servers configured with the MX records point to mail and mail2 through A records. Since these names do not end in a trailing period, the $ORIGIN domain is placed after them, expanding them to mail.example.com and mail2.example.com . Services available at the standard names, such as www.example.com ( WWW ), are pointed at the appropriate servers using the CNAME record. This zone file would be called into service with a zone statement in the /etc/named.conf file similar to the following: 15.2.3.4.2. A Reverse Name Resolution Zone File A reverse name resolution zone file is used to translate an IP address in a particular namespace into a fully qualified domain name (FQDN). It looks very similar to a standard zone file, except that the PTR resource records are used to link the IP addresses to a fully qualified domain name as shown in Example 15.16, "A reverse name resolution zone file" . Example 15.16. A reverse name resolution zone file In this example, IP addresses 10.0.1.1 through 10.0.1.6 are pointed to the corresponding fully qualified domain name. This zone file would be called into service with a zone statement in the /etc/named.conf file similar to the following: There is very little difference between this example and a standard zone statement, except for the zone name. Note that a reverse name resolution zone requires the first three blocks of the IP address reversed followed by .in-addr.arpa . This allows the single block of IP numbers used in the reverse name resolution zone file to be associated with the zone. 15.2.4. Using the rndc Utility The rndc utility is a command-line tool that allows you to administer the named service, both locally and from a remote machine. Its usage is as follows: 15.2.4.1. Configuring the Utility To prevent unauthorized access to the service, named must be configured to listen on the selected port ( 953 by default), and an identical key must be used by both the service and the rndc utility. Table 15.7. Relevant files Path Description /etc/named.conf The default configuration file for the named service. /etc/rndc.conf The default configuration file for the rndc utility. /etc/rndc.key The default key location. The rndc configuration is located in /etc/rndc.conf . If the file does not exist, the utility will use the key located in /etc/rndc.key , which was generated automatically during the installation process using the rndc-confgen -a command. The named service is configured using the controls statement in the /etc/named.conf configuration file as described in Section 15.2.2.3, "Other Statement Types" . Unless this statement is present, only the connections from the loopback address ( 127.0.0.1 ) will be allowed, and the key located in /etc/rndc.key will be used. For more information on this topic, see manual pages and the BIND 9 Administrator Reference Manual listed in Section 15.2.8, "Additional Resources" . Important To prevent unprivileged users from sending control commands to the service, make sure only root is allowed to read the /etc/rndc.key file: 15.2.4.2. Checking the Service Status To check the current status of the named service, use the following command: 15.2.4.3. 
Reloading the Configuration and Zones To reload both the configuration file and zones, type the following at a shell prompt: This will reload the zones while keeping all previously cached responses, so that you can make changes to the zone files without losing all stored name resolutions. To reload a single zone, specify its name after the reload command, for example: Finally, to reload the configuration file and newly added zones only, type: Note If you intend to manually modify a zone that uses Dynamic DNS (DDNS), make sure you run the freeze command first: Once you are finished, run the thaw command to allow the DDNS again and reload the zone: 15.2.4.4. Updating Zone Keys To update the DNSSEC keys and sign the zone, use the sign command. For example: Note that to sign a zone with the above command, the auto-dnssec option has to be set to maintain in the zone statement. For example: 15.2.4.5. Enabling the DNSSEC Validation To enable the DNSSEC validation, issue the following command as root : Similarly, to disable this option, type: See the options statement described in Section 15.2.2.2, "Common Statement Types" for information on how to configure this option in /etc/named.conf . The Red Hat Enterprise Linux 7 Security Guide has a comprehensive section on DNSSEC. 15.2.4.6. Enabling the Query Logging To enable (or disable in case it is currently enabled) the query logging, issue the following command as root : To check the current setting, use the status command as described in Section 15.2.4.2, "Checking the Service Status" . 15.2.5. Using the dig Utility The dig utility is a command-line tool that allows you to perform DNS lookups and debug a nameserver configuration. Its typical usage is as follows: See Section 15.2.3.2, "Common Resource Records" for a list of common values to use for type . 15.2.5.1. Looking Up a Nameserver To look up a nameserver for a particular domain, use the command in the following form: In Example 15.17, "A sample nameserver lookup" , the dig utility is used to display nameservers for example.com . Example 15.17. A sample nameserver lookup 15.2.5.2. Looking Up an IP Address To look up an IP address assigned to a particular domain, use the command in the following form: In Example 15.18, "A sample IP address lookup" , the dig utility is used to display the IP address of example.com . Example 15.18. A sample IP address lookup 15.2.5.3. Looking Up a Host Name To look up a host name for a particular IP address, use the command in the following form: In Example 15.19, "A Sample Host Name Lookup" , the dig utility is used to display the host name assigned to 192.0.32.10 . Example 15.19. A Sample Host Name Lookup 15.2.6. Advanced Features of BIND Most BIND implementations only use the named service to provide name resolution services or to act as an authority for a particular domain. However, BIND version 9 has a number of advanced features that allow for a more secure and efficient DNS service. Important Before attempting to use advanced features like DNSSEC, TSIG, or IXFR (Incremental Zone Transfer), make sure that the particular feature is supported by all nameservers in the network environment, especially when you use older versions of BIND or non-BIND servers. All of the features mentioned are discussed in greater detail in the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" . 15.2.6.1. Multiple Views Optionally, different information can be presented to a client depending on the network a request originates from. 
This is primarily used to deny sensitive DNS entries from clients outside of the local network, while allowing queries from clients inside the local network. To configure multiple views, add the view statement to the /etc/named.conf configuration file. Use the match-clients option to match IP addresses or entire networks and give them special options and zone data. 15.2.6.2. Incremental Zone Transfers (IXFR) Incremental Zone Transfers ( IXFR ) allow a secondary nameserver to only download the updated portions of a zone modified on a primary nameserver. Compared to the standard transfer process, this makes the notification and update process much more efficient. Note that IXFR is only available when using dynamic updating to make changes to primary zone records. If manually editing zone files to make changes, Automatic Zone Transfer ( AXFR ) is used. 15.2.6.3. Transaction SIGnatures (TSIG) Transaction SIGnatures (TSIG) ensure that a shared secret key exists on both primary and secondary nameservers before allowing a transfer. This strengthens the standard IP address-based method of transfer authorization, since attackers would not only need to have access to the IP address to transfer the zone, but they would also need to know the secret key. Since version 9, BIND also supports TKEY , which is another shared secret key method of authorizing zone transfers. Important When communicating over an insecure network, do not rely on IP address-based authentication only. 15.2.6.4. DNS Security Extensions (DNSSEC) Domain Name System Security Extensions ( DNSSEC ) provide origin authentication of DNS data, authenticated denial of existence, and data integrity. When a particular domain is marked as secure, the SERVFAIL response is returned for each resource record that fails the validation. Note that to debug a DNSSEC-signed domain or a DNSSEC-aware resolver, you can use the dig utility as described in Section 15.2.5, "Using the dig Utility" . Useful options are +dnssec (requests DNSSEC-related resource records by setting the DNSSEC OK bit), +cd (tells recursive nameserver not to validate the response), and +bufsize=512 (changes the packet size to 512B to get through some firewalls). 15.2.6.5. Internet Protocol version 6 (IPv6) Internet Protocol version 6 ( IPv6 ) is supported through the use of AAAA resource records, and the listen-on-v6 directive as described in Table 15.3, "Commonly Used Configuration Options" . 15.2.7. Common Mistakes to Avoid The following is a list of recommendations on how to avoid common mistakes users make when configuring a nameserver: Use semicolons and curly brackets correctly An omitted semicolon or unmatched curly bracket in the /etc/named.conf file can prevent the named service from starting. Use period (the . character) correctly In zone files, a period at the end of a domain name denotes a fully qualified domain name. If omitted, the named service will append the name of the zone or the value of USDORIGIN to complete it. Increment the serial number when editing a zone file If the serial number is not incremented, the primary nameserver will have the correct, new information, but the secondary nameservers will never be notified of the change, and will not attempt to refresh their data of that zone. Configure the firewall If a firewall is blocking connections from the named service to other nameservers, the recommended practice is to change the firewall settings. 
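Most of these mistakes can be caught before the service is restarted by using the checking utilities shipped with the bind package, and firewalld can be used to open the standard DNS service ports. The following is a brief sketch; the zone name and file path repeat the examples used earlier in this chapter:

~]# named-checkconf /etc/named.conf
~]# named-checkzone example.com /var/named/example.com.zone
~]# firewall-cmd --permanent --add-service=dns
~]# firewall-cmd --reload

named-checkconf verifies the syntax of the configuration file, and named-checkzone loads the zone file and reports the serial number it finds, which also makes a forgotten serial-number increment easy to spot.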
Warning Using a fixed UDP source port for DNS queries is a potential security vulnerability that could allow an attacker to conduct cache-poisoning attacks more easily. To prevent this, DNS sends queries from a random ephemeral port by default. Configure your firewall to allow outgoing queries from a random UDP source port. The range 1024 to 65535 is used by default. 15.2.8. Additional Resources The following sources of information provide additional resources regarding BIND. 15.2.8.1. Installed Documentation BIND features a full range of installed documentation covering many different topics, each placed in its own subject directory. For each item below, replace version with the version of the bind package installed on the system: /usr/share/doc/bind- version / The main directory containing the most recent documentation. The directory contains the BIND 9 Administrator Reference Manual in HTML and PDF formats, which details BIND resource requirements, how to configure different types of nameservers, how to perform load balancing, and other advanced topics. /usr/share/doc/bind- version /sample/etc/ The directory containing examples of named configuration files. rndc(8) The manual page for the rndc name server control utility, containing documentation on its usage. named(8) The manual page for the Internet domain name server named , containing documentation on assorted arguments that can be used to control the BIND nameserver daemon. lwresd(8) The manual page for the lightweight resolver daemon lwresd , containing documentation on the daemon and its usage. named.conf(5) The manual page with a comprehensive list of options available within the named configuration file. rndc.conf(5) The manual page with a comprehensive list of options available within the rndc configuration file. 15.2.8.2. Online Resources https://access.redhat.com/site/articles/770133 A Red Hat Knowledgebase article about running BIND in a chroot environment, including the differences compared to Red Hat Enterprise Linux 6. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/ The Red Hat Enterprise Linux 7 Security Guide has a comprehensive section on DNSSEC. https://www.icann.org/namecollision The ICANN FAQ on domain name collision .
[ "statement-1 [\" statement-1-name \"] [ statement-1-class ] { option-1 ; option-2 ; option-N ; }; statement-2 [\" statement-2-name \"] [ statement-2-class ] { option-1 ; option-2 ; option-N ; }; statement-N [\" statement-N-name \"] [ statement-N-class ] { option-1 ; option-2 ; option-N ; };", "~]# vim -c \"set backupcopy=yes\" /etc/named.conf", "~]# yum install bind-chroot", "~]USD systemctl status named", "~]# systemctl stop named", "~]# systemctl disable named", "~]# systemctl enable named-chroot", "~]# systemctl start named-chroot", "~]# systemctl status named-chroot", "acl acl-name { match-element ; };", "acl black-hats { 10.0.2.0/24; 192.168.0.0/24; 1234:5678::9abc/24; }; acl red-hats { 10.0.1.0/24; }; options { blackhole { black-hats; }; allow-query { red-hats; }; allow-query-cache { red-hats; }; };", "include \" file-name \"", "include \"/etc/named.rfc1912.zones\";", "options { option ; };", "options { allow-query { localhost; }; listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; max-cache-size 256M; directory \"/var/named\"; statistics-file \"/var/named/data/named_stats.txt\"; recursion yes; dnssec-enable yes; dnssec-validation yes; pid-file \"/run/named/named.pid\"; session-keyfile \"/run/named/session.key\"; };", "zone zone-name [ zone-class ] { option ; };", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-transfer { 192.168.0.2; }; };", "zone \"example.com\" { type slave; file \"slaves/example.com.zone\"; masters { 192.168.0.1; }; };", "notify yes; // notify all secondary nameservers", "notify yes; # notify all secondary nameservers", "notify yes; /* notify all secondary nameservers */", "USDINCLUDE /var/named/penguin.example.com", "USDORIGIN example.com.", "USDTTL 1D", "hostname IN A IP-address", "server1 IN A 10.0.1.3 IN A 10.0.1.5", "alias-name IN CNAME real-name", "server1 IN A 10.0.1.5 www IN CNAME server1", "IN MX preference-value email-server-name", "example.com. IN MX 10 mail.example.com. IN MX 20 mail2.example.com.", "IN NS nameserver-name", "IN NS dns1.example.com. IN NS dns2.example.com.", "last-IP-digit IN PTR FQDN-of-system", "@ IN SOA primary-name-server hostmaster-email ( serial-number time-to-refresh time-to-retry time-to-expire minimum-TTL )", "@ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day", "604800 ; expire after 1 week", "USDORIGIN example.com. USDTTL 86400 @ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day ; ; IN NS dns1.example.com. IN NS dns2.example.com. dns1 IN A 10.0.1.1 IN AAAA aaaa:bbbb::1 dns2 IN A 10.0.1.2 IN AAAA aaaa:bbbb::2 ; ; @ IN MX 10 mail.example.com. IN MX 20 mail2.example.com. mail IN A 10.0.1.5 IN AAAA aaaa:bbbb::5 mail2 IN A 10.0.1.6 IN AAAA aaaa:bbbb::6 ; ; ; This sample zone file illustrates sharing the same IP addresses ; for multiple services: ; services IN A 10.0.1.10 IN AAAA aaaa:bbbb::10 IN A 10.0.1.11 IN AAAA aaaa:bbbb::11 ftp IN CNAME services.example.com. www IN CNAME services.example.com. ; ;", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-update { none; }; };", "USDORIGIN 1.0.10.in-addr.arpa. USDTTL 86400 @ IN SOA dns1.example.com. hostmaster.example.com. 
( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day ; @ IN NS dns1.example.com. ; 1 IN PTR dns1.example.com. 2 IN PTR dns2.example.com. ; 5 IN PTR server1.example.com. 6 IN PTR server2.example.com. ; 3 IN PTR ftp.example.com. 4 IN PTR ftp.example.com.", "zone \"1.0.10.in-addr.arpa\" IN { type master; file \"example.com.rr.zone\"; allow-update { none; }; };", "rndc [ option ...] command [ command-option ]", "~]# chmod o-rwx /etc/rndc.key", "~]# rndc status version: 9.7.0-P2-RedHat-9.7.0-5.P2.el6 CPUs found: 1 worker threads: 1 number of zones: 16 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running", "~]# rndc reload server reload successful", "~]# rndc reload localhost zone reload up-to-date", "~]# rndc reconfig", "~]# rndc freeze localhost", "~]# rndc thaw localhost The zone reload and thaw was successful.", "~]# rndc sign localhost", "zone \"localhost\" IN { type master; file \"named.localhost\"; allow-update { none; }; auto-dnssec maintain; };", "~]# rndc validation on", "~]# rndc validation off", "~]# rndc querylog", "dig [@ server ] [ option ...] name type", "dig name NS", "~]$ dig example.com NS ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> example.com NS ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57883 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;example.com. IN NS ;; ANSWER SECTION: example.com. 99374 IN NS a.iana-servers.net. example.com. 99374 IN NS b.iana-servers.net. ;; Query time: 1 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:04:06 2010 ;; MSG SIZE rcvd: 77", "dig name A", "~]$ dig example.com A ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> example.com A ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4849 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0 ;; QUESTION SECTION: ;example.com. IN A ;; ANSWER SECTION: example.com. 155606 IN A 192.0.32.10 ;; AUTHORITY SECTION: example.com. 99175 IN NS a.iana-servers.net. example.com. 99175 IN NS b.iana-servers.net. ;; Query time: 1 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:07:25 2010 ;; MSG SIZE rcvd: 93", "dig -x address", "~]$ dig -x 192.0.32.10 ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> -x 192.0.32.10 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29683 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 6 ;; QUESTION SECTION: ;10.32.0.192.in-addr.arpa. IN PTR ;; ANSWER SECTION: 10.32.0.192.in-addr.arpa. 21600 IN PTR www.example.com. ;; AUTHORITY SECTION: 32.0.192.in-addr.arpa. 21600 IN NS b.iana-servers.org. 32.0.192.in-addr.arpa. 21600 IN NS c.iana-servers.net. 32.0.192.in-addr.arpa. 21600 IN NS d.iana-servers.net. 32.0.192.in-addr.arpa. 21600 IN NS ns.icann.org. 32.0.192.in-addr.arpa. 21600 IN NS a.iana-servers.net. ;; ADDITIONAL SECTION: a.iana-servers.net. 13688 IN A 192.0.34.43 b.iana-servers.org. 5844 IN A 193.0.0.236 b.iana-servers.org. 5844 IN AAAA 2001:610:240:2::c100:ec c.iana-servers.net. 12173 IN A 139.91.1.10 c.iana-servers.net. 12173 IN AAAA 2001:648:2c30::1:10 ns.icann.org. 12884 IN A 192.0.34.126 ;; Query time: 156 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:25:15 2010 ;; MSG SIZE rcvd: 310" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-BIND
Chapter 1. Operators overview Operators are among the most important components of OpenShift Container Platform. They are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and the OpenShift CLI ( oc ). They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators are designed specifically for Kubernetes-native applications to implement and automate common Day 1 operations, such as installation and configuration. Operators can also automate Day 2 operations, such as autoscaling up or down and creating backups. All of these activities are directed by a piece of software running on your cluster. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators Managed by the Cluster Version Operator (CVO) and installed by default to perform cluster functions. Optional add-on Operators Managed by Operator Lifecycle Manager (OLM) and can be made accessible for users to run in their applications. Also known as OLM-based Operators . 1.1. For developers As an Operator author, you can perform the following development tasks for OLM-based Operators: Install Operator SDK CLI . Create Go-based Operators , Ansible-based Operators , and Helm-based Operators . Use Operator SDK to build, test, and deploy an Operator . Install and subscribe an Operator to your namespace . Create an application from an installed Operator through the web console . Additional resources Machine deletion lifecycle hook examples for Operator developers 1.2. For administrators As a cluster administrator, you can perform the following administrative tasks for OLM-based Operators: Manage custom catalogs . Allow non-cluster administrators to install Operators . Install an Operator from OperatorHub . View Operator status . Manage Operator conditions . Upgrade installed Operators . Delete installed Operators . Configure proxy support . Using Operator Lifecycle Manager in disconnected environments . For information about the cluster Operators that Red Hat provides, see Cluster Operators reference . 1.3. Next steps To understand more about Operators, see What are Operators?
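As a quick illustration of the two systems, both kinds of Operators can be inspected with standard oc commands; the namespace below is the default for OLM-installed Operators and is an illustrative assumption for your cluster:

$ oc get clusteroperators                      # cluster Operators managed by the CVO
$ oc get csv -n openshift-operators            # OLM-based Operators (ClusterServiceVersions)
$ oc get subscriptions -n openshift-operators  # the Subscriptions that keep them updated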
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operators/operators-overview
20.11. CPU Model and Topology This section covers the requirements for the CPU model. Note that every hypervisor has its own policy for which CPU features the guest will see by default. The set of CPU features presented to the guest by QEMU/KVM depends on the CPU model chosen in the guest virtual machine configuration. qemu32 and qemu64 are basic CPU models but there are other models (with additional features) available. Each model and its topology is specified using the following elements from the domain XML: <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> Figure 20.13. CPU model and topology example 1 <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 20.14. CPU model and topology example 2 <cpu mode='host-passthrough'/> Figure 20.15. CPU model and topology example 3 In cases where no restrictions are to be put on either the CPU model or its features, a simpler cpu element such as the following may be used. <cpu> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 20.16. CPU model and topology example 4 The components of this section of the domain XML are as follows: Table 20.9. CPU model and topology elements Element Description <cpu> This element contains all parameters for the vCPU feature set. <match> Specifies how closely the features indicated in the <cpu> element must match the vCPUs that are available. The match attribute can be omitted if <topology> is the only element nested in the <cpu> element. Possible values for the match attribute are: minimum - The features listed are the minimum requirement. There may be more features available in the vCPU than are indicated, but this is the minimum that will be accepted. This value will fail if the minimum requirements are not met. exact - the virtual CPU provided to the guest virtual machine must exactly match the features specified. If no match is found, an error will result. strict - the guest virtual machine will not be created unless the host physical machine CPU exactly matches the specification. If the match attribute is omitted from the <cpu> element, the default setting match='exact' is used. <mode> This optional attribute may be used to make it easier to configure a guest virtual machine CPU to be as close to the host physical machine CPU as possible. Possible values for the mode attribute are: custom - describes how the CPU is presented to the guest virtual machine. This is the default setting when the mode attribute is not specified. This mode makes it so that a persistent guest virtual machine will see the same hardware no matter what host physical machine the guest virtual machine is booted on. host-model - this is essentially a shortcut to copying the host physical machine CPU definition from the capabilities XML into the domain XML. As the CPU definition is copied just before starting a domain, the same XML can be used on different host physical machines while still providing the best guest virtual machine CPU each host physical machine supports. Neither the match attribute nor any feature elements can be used in this mode. For more information, see libvirt domain XML CPU models. host-passthrough - With this mode, the CPU visible to the guest virtual machine is exactly the same as the host physical machine CPU including elements that cause errors within libvirt. 
The obvious downside of this mode is that the guest virtual machine environment cannot be reproduced on different hardware and therefore this mode is recommended with great caution. Neither model nor feature elements are allowed in this mode. Note that in both host-model and host-passthrough mode, the real (approximate in host-passthrough mode) CPU definition which would be used on the current host physical machine can be determined by specifying the VIR_DOMAIN_XML_UPDATE_CPU flag when calling the virDomainGetXMLDesc API. When running a guest virtual machine that might be prone to operating system reactivation when presented with different hardware, and which will be migrated between host physical machines with different capabilities, you can use this output to rewrite the XML to the custom mode for more robust migration. <model> Specifies the CPU model requested by the guest virtual machine. The list of available CPU models and their definition can be found in the cpu_map.xml file installed in libvirt's data directory. If a hypervisor is not able to use the exact CPU model, libvirt automatically falls back to the closest model supported by the hypervisor while maintaining the list of CPU features. An optional fallback attribute can be used to forbid this behavior, in which case an attempt to start a domain requesting an unsupported CPU model will fail. Supported values for the fallback attribute are: allow (this is the default), and forbid . The optional vendor_id attribute can be used to set the vendor id seen by the guest virtual machine. It must be exactly 12 characters long. If not set, the vendor id of the host physical machine is used. Typical possible values are AuthenticAMD and GenuineIntel . <vendor> Specifies the CPU vendor requested by the guest virtual machine. If this element is missing, the guest virtual machine runs on a CPU matching given features regardless of its vendor. The list of supported vendors can be found in cpu_map.xml . <topology> Specifies the requested topology of the virtual CPU provided to the guest virtual machine. Three non-zero values have to be given for sockets, cores, and threads: total number of CPU sockets, number of cores per socket, and number of threads per core, respectively. <feature> Can contain zero or more elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the same file as CPU models. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values: force - forces the feature to be supported by the virtual CPU regardless of whether it is actually supported by the host physical machine CPU. require - dictates that guest virtual machine creation will fail unless the feature is supported by the host physical machine CPU. This is the default setting. optional - the feature is supported by the virtual CPU, but only if it is supported by the host physical machine CPU. disable - the feature is not supported by the virtual CPU. forbid - guest virtual machine creation will fail if the feature is supported by the host physical machine CPU. 20.11.1. Guest virtual machine NUMA topology Guest virtual machine NUMA topology can be specified using the <numa> element and the following from the domain XML: <cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu> ... Figure 20.17. Guest Virtual Machine NUMA Topology Each cell element specifies a NUMA cell or a NUMA node. cpus specifies the CPU or range of CPUs that are part of the node. 
memory specifies the node memory in kibibytes (blocks of 1024 bytes). Each cell or node is assigned a cellid or nodeid in increasing order starting from 0.
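One way to verify the effect of a <topology> or <numa> definition such as the above is to inspect the running guest; the following is a minimal sketch, assuming a Linux guest with the util-linux and numactl packages installed:

~]$ lscpu                # reports sockets, cores per socket, and threads per core
~]$ numactl --hardware   # lists the NUMA nodes and the memory assigned to each

On the host physical machine, the virsh capabilities command prints the host CPU definition that the host-model mode copies into the domain XML.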
[ "<cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu>", "<cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu>", "<cpu mode='host-passthrough'/>", "<cpu> <topology sockets='1' cores='2' threads='1'/> </cpu>", "<cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-libvirt-dom-xml-cpu-model-top
Chapter 5. Developer previews This section describes the developer preview features introduced in Red Hat OpenShift Data Foundation 4.15. Important Developer preview features are subject to Developer preview support limitations. Developer preview releases are not intended to be run in production environments. The clusters deployed with the developer preview features are considered to be development clusters and are not supported through the Red Hat Customer Portal case management system. If you need assistance with developer preview features, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. 5.1. Multicloud Object Gateway supports STS for clients Multicloud Object Gateway (MCG) provides support for a security token service (STS) similar to the one provided by Amazon Web Services. To allow other users to assume the role of a certain user, it is necessary to assign a role configuration to the user using the MCG command-line interface. For more information, see the knowledgebase article, Use the Multi-Cloud Object Gateway's Security Token Service to assume the role of another user . 5.2. Support RADOS namespace for external mode The RADOS block device (RBD) storage class created in the OpenShift Data Foundation cluster uses a namespace for provisioning storage instead of the complete pool. The newly created namespace has restricted permissions. For more information, see the knowledgebase article, Adding RADOS namespace for external mode cluster . 5.3. OpenShift Data Foundation deployed across three vSphere clusters with vSphere IPI OpenShift Data Foundation supports OpenShift deployment stretched across vSphere installer-provisioned infrastructure clusters managed by one vCenter. This support enables you to deploy OpenShift Container Platform and OpenShift Data Foundation across Availability Zones (AZ) with each replica having affinity to an AZ. This helps to survive the failure of any single zone, as a minimum of three zones is required for the deployment. For more information, see Installing a cluster on vSphere with customizations . 5.4. User capabilities for CephObjectStoreUser With this release, user capabilities (caps) for the RADOS gateway (RGW) are supported by using the CephObjectStore CRD. Enabling these caps, such as user, bucket, and so on, gives administrator-like capabilities through the REST API, similar to radosgw-admin commands. For more information, see the knowledgebase article, User capabilities in CephObjectStoreUser . 5.5. Ceph-CSI built-in capability to find and clean stale subvolumes OpenShift Data Foundation 4.15 introduces an inbuilt script to delete stale volumes, that is, RADOS block device (RBD) images or CephFS subvolumes without a parent PVC, on an OpenShift Data Foundation cluster. For more information, see the knowledgebase article, Listing and cleaning stale subvolumes . 5.6. Complete bucket policy elements in Multicloud Object Gateway With this release, bucket policies can be updated to allow lists in Multicloud Object Gateway. For example, a policy definition created for a bucket can be such that read access is granted to all directories whereas only one specific directory has write access. For more information, see the knowledgebase article, Support for additional elements to the S3 BucketPolicy in Multicloud Object Gateway . 5.7. 
Recovery to a replacement cluster with Regional DR When there is a failure with the primary cluster, the options are either to repair the existing cluster, wait for its recovery, or replace it entirely if it is irredeemable. The failed primary cluster can be replaced with a new cluster, and fallback (relocate) to this new cluster can be enabled. For more information, contact Red Hat Customer Support . 5.8. Support IPv6 for external mode With this release, IPv6 is supported in OpenShift Data Foundation external mode deployments.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/4.15_release_notes/developer_previews
Chapter 3. Running the JBoss Server Migration Tool You can run the JBoss Server Migration Tool in either of the following ways. Interactive mode : This mode, which is the default, allows you to choose exactly which configurations you want to migrate. Non-interactive mode : This mode allows you to run the tool without prompts. Important You must stop both the source and the target JBoss EAP servers before you run the JBoss Server Migration Tool. 3.1. Run the JBoss Server Migration Tool in interactive mode By default, the JBoss Server Migration Tool runs interactively. This mode allows you to choose exactly which server configurations you want to migrate. Note Interactive mode does not allow you to choose which subsystems to migrate. For information on how to configure the tool at the subsystem or task level, see Configure the migration tasks performed by the JBoss Server Migration Tool . The following are the basic steps that are performed for a minimal migration. If the server from which you are migrating includes custom configurations, for example deployments, or if it is missing default resources, the tool provides additional prompts. Procedure To run the tool in interactive mode, navigate to the target server installation directory and run the following command, providing the source argument as the path to the source server installation. You are prompted to determine if you want to migrate the source server's standalone configurations, which are located in the EAP_PREVIOUS_HOME /standalone/configuration/ directory, to the target server's standalone configurations, which are located in the EAP_NEW_HOME /standalone/configuration/ directory. If you respond with no , standalone server migration is skipped and no standalone server configuration files are migrated. If you respond with yes , you see the following prompt. Respond with yes to migrate all of the source server's standalone server configuration files. Respond with no to receive a prompt for each individual standalone*.xml configuration file. Next, you are prompted to determine if you want to migrate the source server's managed domain configurations, which are located in the EAP_PREVIOUS_HOME /domain/configuration/ directory, to the target server's managed domain configurations, which are located in the EAP_NEW_HOME /domain/configuration/ directory. If you respond with no , managed domain migration is skipped and no managed domain configuration files are migrated. If you respond with yes , the tool begins migrating the managed domain content of the source server. A ciphered repository is used to store data, such as deployments and deployment overlays, that are referenced by the source server's managed domain and host configurations. Because the source and target servers use a similar content repository, the tool simply copies the data from the source server to the target server and prints the results to the console and the server log. Next, the migration tool scans the source server for managed domain configuration files, prints the results to the console, and provides the following prompt. Respond with yes to migrate all of the source server's managed domain configuration files. Respond with no to receive a prompt for each individual managed domain configuration file. Next, the migration tool scans the source server for host configuration files, prints the results to the console, and provides the following prompt. Respond with yes to migrate all of the source server's host configuration files. 
Respond with no to receive a prompt for each individual host configuration file. Upon completion, you should see the following message in the server console. 3.2. Run the JBoss Server Migration Tool in non-interactive mode You can run the JBoss Server Migration Tool in non-interactive mode. This mode allows it to run without prompts. Note The JBoss Server Migration Tool automatically migrates all subsystem configurations for all server configuration files. For information on how to configure the tool at the subsystem or task level, see Configure the migration tasks performed by the JBoss Server Migration Tool . Procedure To run the tool in non-interactive mode, navigate to the target server installation directory and run the following command, providing the source argument as the path to the source server installation and setting the --interactive or -i argument to false . By default, the tool automatically migrates all of the source server's standalone and managed domain configuration files. However, you can configure the tool's properties to skip migration of specific configurations. Upon completion, you should see the following message in the server console.
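For reference, a minimal interactive run, assembled from the prompts and output shown in the commands for this chapter, looks like the following; EAP_PREVIOUS_HOME and EAP_NEW_HOME are placeholders for your actual installation paths:

$ MIGRATION_TOOL_HOME/bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --target EAP_NEW_HOME
Migrate the source's standalone server? yes/no? yes
Migrate all configurations? yes/no? yes
Migrate the source's managed domain? yes/no? yes
Migrate all configurations? yes/no? yes
...
Migration Result: SUCCESS

Running with --interactive false produces the same result without the yes/no prompts.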
[ "MIGRATION_TOOL_HOME/bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --target EAP_NEW_HOME", "Migrate the source's standalone server? yes/no? yes", "Migrate all configurations? yes/no? yes", "Migrate the source's managed domain? yes/no? yes", "INFO [ServerMigrationTask#397] Migrating domain content found: [22/caa450a9ba3b84eaf5a15b6da418b92ce6c98e/content, 23/b62a37ba8a4830622bfcdb960280577cc6796e/content] INFO [ServerMigrationTask#398] Resource with path /EAP_NEW_HOME/domain/data/content/22/caa450a9ba3b84eaf5a15b6da418b92ce6c98e/content migrated. INFO [ServerMigrationTask#399] Resource with path /EAP_NEW_HOME/domain/data/content/23/b62a37ba8a4830622bfcdb960280577cc6796e/content migrated.", "Migrate all configurations? yes/no? yes", "INFO [ServerMigrationTask#457] Retrieving source's host configurations INFO [ServerMigrationTask#457] /jboss-eap-8.0/domain/configuration/host-master.xml INFO [ServerMigrationTask#457] /jboss-eap-8.0/domain/configuration/host-slave.xml INFO [ServerMigrationTask#457] /jboss-eap-8.0/domain/configuration/host.xml Migrate all configurations? yes/no? yes", "Migration Result: SUCCESS", "MIGRATION_TOOL_HOME/bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --target EAP_NEW_HOME --interactive false", "Migration Result: SUCCESS" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_the_jboss_server_migration_tool/assembly_run-server-migration-tool_server-migration-tool
Chapter 7. Configuring physical switches for OpenStack Networking
Chapter 7. Configuring physical switches for OpenStack Networking This chapter documents the common physical switch configuration steps required for OpenStack Networking. Vendor-specific configuration is included for certain switches. 7.1. Planning your physical network environment The physical network adapters in your OpenStack nodes carry different types of network traffic, such as instance traffic, storage data, or authentication requests. The type of traffic these NICs carry affects how you must configure the ports on the physical switch. First, you must decide which physical NICs on your Compute node you want to carry which types of traffic. Then, when the NIC is cabled to a physical switch port, you must configure the switch port to allow trunked or general traffic. For example, the following diagram depicts a Compute node with two NICs, eth0 and eth1. Each NIC is cabled to a Gigabit Ethernet port on a physical switch, with eth0 carrying instance traffic, and eth1 providing connectivity for OpenStack services: Figure 7.1. Sample network layout Note This diagram does not include any additional redundant NICs required for fault tolerance. Additional resources Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment guide. 7.2. Configuring a Cisco Catalyst switch 7.2.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.2.2. Configuring trunk ports for a Cisco Catalyst switch If using a Cisco Catalyst switch running Cisco IOS, you might use the following configuration syntax to allow traffic for VLANs 110 and 111 to pass through to your instances. This configuration assumes that your physical node has an ethernet cable connected to interface GigabitEthernet1/0/12 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Use the following list to understand these parameters: Field Description interface GigabitEthernet1/0/12 The switch port that the NIC of the X node connects to. Ensure that you replace the GigabitEthernet1/0/12 value with the correct port value for your environment. Use the show interface command to view a list of ports. description Trunk to Compute Node A unique and descriptive value that you can use to identify this interface. spanning-tree portfast trunk If your environment uses STP, set this value to instruct Port Fast that this port is used to trunk traffic. switchport trunk encapsulation dot1q Enables the 802.1q trunking standard (rather than ISL). This value varies depending on the configuration that your switch supports. switchport mode trunk Configures this port as a trunk port, rather than an access port, meaning that it allows VLAN traffic to pass through to the virtual switches. switchport trunk native vlan 2 Sets a native VLAN to instruct the switch where to send untagged (non-VLAN) traffic. switchport trunk allowed vlan 2,110,111 Defines which VLANs are allowed through the trunk.
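Putting the parameters in this table together, the trunk port configuration for this example (also listed in the commands for this chapter) is as follows; adjust the interface name and VLAN IDs to match your environment before applying it:

interface GigabitEthernet1/0/12
  description Trunk to Compute Node
  spanning-tree portfast trunk
  switchport trunk encapsulation dot1q
  switchport mode trunk
  switchport trunk native vlan 2
  switchport trunk allowed vlan 2,110,111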
7.2.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.2.4. Configuring access ports for a Cisco Catalyst switch Using the example from the Figure 7.1, "Sample network layout" diagram, GigabitEthernet1/0/13 (on a Cisco Catalyst switch) is configured as an access port for eth1 . In this configuration, your physical node has an ethernet cable connected to interface GigabitEthernet1/0/13 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. These settings are described below: Field Description interface GigabitEthernet1/0/13 The switch port that the NIC of the X node connects to. Ensure that you replace the GigabitEthernet1/0/13 value with the correct port value for your environment. Use the show interface command to view a list of ports. description Access port for Compute Node A unique and descriptive value that you can use to identify this interface. switchport mode access Configures this port as an access port, rather than a trunk port. switchport access vlan 200 Configures the port to allow traffic on VLAN 200. You must configure your Compute node with an IP address from this VLAN. spanning-tree portfast If using STP, set this value to instruct STP not to attempt to initialize this as a trunk, allowing for quicker port handshakes during initial connections (such as server reboot). 7.2.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with director guide. 7.2.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment guide. 7.2.7. Configuring LACP for a Cisco Catalyst switch In this example, the Compute node has two NICs using VLAN 100: Procedure Physically connect both NICs on the Compute node to the switch (for example, ports 12 and 13). Create the LACP port channel: Configure switch ports 12 (Gi1/0/12) and 13 (Gi1/0/13): Review your new port channel. The resulting output lists the new port-channel Po1 , with member ports Gi1/0/12 and Gi1/0/13 : Note Remember to apply your changes by copying the running-config to the startup-config: copy running-config startup-config . 7.2.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic.
Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.2.9. Configuring MTU settings for a Cisco Catalyst switch Complete the steps in this example procedure to enable jumbo frames on your Cisco Catalyst 3750 switch. Review the current MTU settings: MTU settings are changed switch-wide on 3750 switches, and not for individual interfaces. Run the following commands to configure the switch to use jumbo frames of 9000 bytes. You might prefer to configure the MTU settings for individual interfaces, if your switch supports this feature. Note Remember to save your changes by copying the running-config to the startup-config: copy running-config startup-config . Reload the switch to apply the change. Important Reloading the switch causes a network outage for any devices that are dependent on the switch. Therefore, reload the switch only during a scheduled maintenance period. After the switch reloads, confirm the new jumbo MTU size. The exact output may differ depending on your switch model. For example, System MTU might apply to non-Gigabit interfaces, and Jumbo MTU might describe all Gigabit interfaces. 7.2.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.2.11. Configuring LLDP for a Cisco Catalyst switch Procedure Run the lldp run command to enable LLDP globally on your Cisco Catalyst switch: View any neighboring LLDP-compatible devices: Note Remember to save your changes by copying the running-config to the startup-config: copy running-config startup-config . 7.3. Configuring a Cisco Nexus switch 7.3.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.3.2. Configuring trunk ports for a Cisco Nexus switch If using a Cisco Nexus switch, you might use the following configuration syntax to allow traffic for VLANs 110 and 111 to pass through to your instances. This configuration assumes that your physical node has an ethernet cable connected to interface Ethernet1/12 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. 7.3.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports.
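For reference, the Nexus trunk port configuration for the scenario above, from the commands for this chapter, is:

interface Ethernet1/12
  description Trunk to Compute Node
  switchport mode trunk
  switchport trunk allowed vlan 2,110,111
  switchport trunk native vlan 2
end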
7.3.4. Configuring access ports for a Cisco Nexus switch Procedure Using the example from the Figure 7.1, "Sample network layout" diagram, Ethernet1/13 (on a Cisco Nexus switch) is configured as an access port for eth1 . This configuration assumes that your physical node has an ethernet cable connected to interface Ethernet1/13 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. 7.3.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with director guide. 7.3.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment guide. 7.3.7. Configuring LACP for a Cisco Nexus switch In this example, the Compute node has two NICs using VLAN 100: Procedure Physically connect the Compute node NICs to the switch (for example, ports 12 and 13). Confirm that LACP is enabled: Configure ports 1/12 and 1/13 as access ports, and as members of a channel group. Depending on your deployment, you can deploy trunk interfaces rather than access interfaces. For example, for Cisco UCS the NICs are virtual interfaces, so you might prefer to configure access ports exclusively. Often these interfaces contain VLAN tagging configurations. Note When you use PXE to provision nodes on Cisco switches, you might need to set the options no lacp graceful-convergence and no lacp suspend-individual to bring up the ports and boot the server. For more information, see your Cisco switch documentation. 7.3.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.3.9. Configuring MTU settings for a Cisco Nexus 7000 switch Apply MTU settings to a single interface on 7000-series switches. Procedure Run the following commands to configure interface 1/12 to use jumbo frames of 9000 bytes: 7.3.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.3.11. Configuring LLDP for a Cisco Nexus 7000 switch Procedure You can enable LLDP for individual interfaces on Cisco Nexus 7000-series switches: Note Remember to save your changes by copying the running-config to the startup-config: copy running-config startup-config .
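For example, the per-interface LLDP configuration from the commands for this chapter enables LLDP transmit and receive on both bonded interfaces, together with the PXE-related LACP options noted earlier:

interface ethernet 1/12
  lldp transmit
  lldp receive
  no lacp suspend-individual
  no lacp graceful-convergence
interface ethernet 1/13
  lldp transmit
  lldp receive
  no lacp suspend-individual
  no lacp graceful-convergence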
7.4. Configuring a Cumulus Linux switch 7.4.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.4.2. Configuring trunk ports for a Cumulus Linux switch This configuration assumes that your physical node has transceivers connected to switch ports swp1 and swp2 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure Use the following configuration syntax to allow traffic for VLANs 100 and 200 to pass through to your instances. 7.4.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.4.4. Configuring access ports for a Cumulus Linux switch This configuration assumes that your physical node has an ethernet cable connected to the interface on the physical switch. Cumulus Linux switches use eth for management interfaces and swp for access/trunk ports. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure Using the example from the Figure 7.1, "Sample network layout" diagram, swp1 (on a Cumulus Linux switch) is configured as an access port. 7.4.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with director guide. 7.4.6. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.4.7. Configuring MTU settings for a Cumulus Linux switch Procedure This example enables jumbo frames on your Cumulus Linux switch. Note Remember to apply your changes by reloading the updated configuration: sudo ifreload -a
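For example, the trunk and jumbo frame settings referenced in the sections above are expressed as /etc/network/interfaces stanzas like the following, taken from the commands for this chapter:

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports glob swp1-2
    bridge-vids 100 200

auto swp1
iface swp1
    mtu 9000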
7.4.8. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.4.9. Configuring LLDP for a Cumulus Linux switch By default, the LLDP service lldpd runs as a daemon and starts when the switch boots. Procedure To view all LLDP neighbors on all ports/interfaces, run the following command: 7.5. Configuring an Extreme Networks EXOS switch 7.5.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.5.2. Configuring trunk ports on an Extreme Networks EXOS switch If using an X-670 series switch, refer to the following example to allow traffic for VLANs 110 and 111 to pass through to your instances. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure This configuration assumes that your physical node has an ethernet cable connected to interface 24 on the physical switch. In this example, DATA and MNGT are the VLAN names. 7.5.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.5.4. Configuring access ports for an Extreme Networks EXOS switch This configuration assumes that your physical node has an ethernet cable connected to interface 10 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure In this configuration example, on an Extreme Networks X-670 series switch, 10 is used as an access port for eth1 . For example: 7.5.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with director guide. 7.5.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment guide.
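The network-environment.yaml fragment referenced here and in the earlier LACP sections bonds two NICs and sets the bond mode to 802.3ad. This is a cleaned-up rendering of the snippet in the commands for this chapter; nic3 and nic4 are example interface names:

- type: linux_bond
  name: bond1
  mtu: 9000
  bonding_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

with the bond options parameter set as:

BondInterfaceOvsOptions: "mode=802.3ad"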
7.5.7. Configuring LACP on an Extreme Networks EXOS switch Procedure In this example, the Compute node has two NICs using VLAN 100: For example: Note You might need to adjust the timeout period in the LACP negotiation script. For more information, see https://gtacknowledge.extremenetworks.com/articles/How_To/LACP-configured-ports-interfere-with-PXE-DHCP-on-servers 7.5.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.5.9. Configuring MTU settings on an Extreme Networks EXOS switch Procedure Run the commands in this example to enable jumbo frames on an Extreme Networks EXOS switch and configure support for forwarding IP packets of 9000 bytes: Example 7.5.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.5.11. Configuring LLDP settings on an Extreme Networks EXOS switch Procedure In this example, LLDP is enabled on an Extreme Networks EXOS switch. 11 represents the port string: 7.6. Configuring a Juniper EX Series switch 7.6.1. About trunk ports With OpenStack Networking you can connect instances to the VLANs that already exist on your physical network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the 8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch. 7.6.2. Configuring trunk ports for a Juniper EX Series switch Procedure If using a Juniper EX series switch running Juniper JunOS, use the following configuration syntax to allow traffic for VLANs 110 and 111 to pass through to your instances. This configuration assumes that your physical node has an ethernet cable connected to interface ge-1/0/12 on the physical switch. Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage.
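For reference, the trunk configuration for this scenario, from the commands for this chapter, is:

ge-1/0/12 {
    description Trunk to Compute Node;
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members [110 111];
            }
            native-vlan-id 2;
        }
    }
}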
7.6.3. About access ports Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports. 7.6.4. Configuring access ports for a Juniper EX Series switch This example, on a Juniper EX series switch, shows ge-1/0/13 as an access port for eth1 . Important These values are examples. You must change the values in this example to match those in your environment. Copying and pasting these values into your switch configuration without adjustment can result in an unexpected outage. Procedure This configuration assumes that your physical node has an ethernet cable connected to interface ge-1/0/13 on the physical switch. 7.6.5. About LACP port aggregation You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends: on the physical NICs, and on the physical switch ports. Additional resources Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with director guide. 7.6.6. Configuring LACP on the physical NIC You can configure Link Aggregation Control Protocol (LACP) on a physical NIC. Procedure Edit the /home/stack/network-environment.yaml file: Configure the Open vSwitch bridge to use LACP: Additional resources Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment guide. 7.6.7. Configuring LACP for a Juniper EX Series switch In this example, the Compute node has two NICs using VLAN 100. Procedure Physically connect the Compute node's two NICs to the switch (for example, ports 12 and 13). Create the port aggregate: Configure switch ports 12 (ge-1/0/12) and 13 (ge-1/0/13) to join the port aggregate ae1 : Note For Red Hat OpenStack Platform director deployments, in order to PXE boot from the bond, you must configure one of the bond members as lacp force-up to ensure that only one bond member comes up during introspection and first boot. The bond member that you configure with lacp force-up must be the same bond member that has the MAC address in instackenv.json (the MAC address known to ironic must be the same MAC address configured with force-up). Enable LACP on port aggregate ae1 : Add aggregate ae1 to VLAN 100: Review your new port channel. The resulting output lists the new port aggregate ae1 with member ports ge-1/0/12 and ge-1/0/13 : Note Remember to apply your changes by running the commit command. 7.6.8. About MTU settings You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000 bytes) are required for certain NFS or iSCSI traffic. Note You must change MTU settings from end-to-end on all hops that the traffic is expected to pass through, including any virtual switches. Additional resources Configuring maximum transmission unit (MTU) settings 7.6.9. Configuring MTU settings for a Juniper EX Series switch This example enables jumbo frames on your Juniper EX4200 switch. Note The MTU value is calculated differently depending on whether you are using Juniper or Cisco devices. For example, 9216 on Juniper would equal 9202 for Cisco. The extra bytes are used for L2 headers, where Cisco adds this automatically to the MTU value specified, but the usable MTU will be 14 bytes smaller than specified when using Juniper. So in order to support an MTU of 9000 on the VLANs, the MTU of 9014 would have to be configured on Juniper. Procedure For Juniper EX series switches, MTU settings are set for individual interfaces. These commands configure jumbo frames on the ge-1/0/14 and ge-1/0/15 ports: Note Remember to save your changes by running the commit command. If using a LACP aggregate, you will need to set the MTU size there, and not on the member NICs. For example, this setting configures the MTU size for the ae1 aggregate:
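These MTU commands, from the commands for this chapter, are:

set interfaces ge-1/0/14 mtu 9216
set interfaces ge-1/0/15 mtu 9216

and, for the aggregate:

set interfaces ae1 mtu 9216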
7.6.10. About LLDP discovery The ironic-python-agent service listens for LLDP packets from connected switches. The collected information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection process. 7.6.11. Configuring LLDP for a Juniper EX Series switch You can enable LLDP globally for all interfaces, or just for individual ones. Procedure Use the following to enable LLDP globally on your Juniper EX 4200 switch: Use the following to enable LLDP for the single interface ge-1/0/14 : Note Remember to apply your changes by running the commit command.
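For reference, these two options correspond to stanzas like the following; the enclosing protocols level is the standard Junos location for LLDP configuration and is assumed here, as the raw snippets in the commands for this chapter do not show it explicitly:

protocols {
    lldp {
        interface all {
            enable;
        }
    }
}

protocols {
    lldp {
        interface ge-1/0/14 {
            enable;
        }
    }
}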
[ "interface GigabitEthernet1/0/12 description Trunk to Compute Node spanning-tree portfast trunk switchport trunk encapsulation dot1q switchport mode trunk switchport trunk native vlan 2 switchport trunk allowed vlan 2,110,111", "interface GigabitEthernet1/0/13 description Access port for Compute Node switchport mode access switchport access vlan 200 spanning-tree portfast", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000", "BondInterfaceOvsOptions: \"mode=802.3ad\"", "interface port-channel1 switchport access vlan 100 switchport mode access spanning-tree guard root", "sw01# config t Enter configuration commands, one per line. End with CNTL/Z. sw01(config) interface GigabitEthernet1/0/12 switchport access vlan 100 switchport mode access speed 1000 duplex full channel-group 10 mode active channel-protocol lacp interface GigabitEthernet1/0/13 switchport access vlan 100 switchport mode access speed 1000 duplex full channel-group 10 mode active channel-protocol lacp", "sw01# show etherchannel summary <snip> Number of channel-groups in use: 1 Number of aggregators: 1 Group Port-channel Protocol Ports ------+-------------+-----------+----------------------------------------------- 1 Po1(SD) LACP Gi1/0/12(D) Gi1/0/13(D)", "sw01# show system mtu System MTU size is 1600 bytes System Jumbo MTU size is 1600 bytes System Alternate MTU size is 1600 bytes Routing MTU size is 1600 bytes", "sw01# config t Enter configuration commands, one per line. End with CNTL/Z. sw01(config)# system mtu jumbo 9000 Changes to the system jumbo MTU will not take effect until the next reload is done", "sw01# reload Proceed with reload? [confirm]", "sw01# show system mtu System MTU size is 1600 bytes System Jumbo MTU size is 9000 bytes System Alternate MTU size is 1600 bytes Routing MTU size is 1600 bytes", "sw01# config t Enter configuration commands, one per line. End with CNTL/Z. 
sw01(config)# lldp run", "sw01# show lldp neighbor Capability codes: (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other Device ID Local Intf Hold-time Capability Port ID DEP42037061562G3 Gi1/0/11 180 B,T 422037061562G3:P1 Total entries displayed: 1", "interface Ethernet1/12 description Trunk to Compute Node switchport mode trunk switchport trunk allowed vlan 2,110,111 switchport trunk native vlan 2 end", "interface Ethernet1/13 description Access port for Compute Node switchport mode access switchport access vlan 200", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000", "BondInterfaceOvsOptions: \"mode=802.3ad\"", "(config)# show feature | include lacp lacp 1 enabled", "interface Ethernet1/12 description Access port for Compute Node switchport mode access switchport access vlan 200 channel-group 10 mode active interface Ethernet1/13 description Access port for Compute Node switchport mode access switchport access vlan 200 channel-group 10 mode active", "interface ethernet 1/12 mtu 9216 exit", "interface ethernet 1/12 lldp transmit lldp receive no lacp suspend-individual no lacp graceful-convergence interface ethernet 1/13 lldp transmit lldp receive no lacp suspend-individual no lacp graceful-convergence", "auto bridge iface bridge bridge-vlan-aware yes bridge-ports glob swp1-2 bridge-vids 100 200", "auto bridge iface bridge bridge-vlan-aware yes bridge-ports glob swp1-2 bridge-vids 100 200 auto swp1 iface swp1 bridge-access 100 auto swp2 iface swp2 bridge-access 200", "auto swp1 iface swp1 mtu 9000", "cumulus@switch$ netshow lldp Local Port Speed Mode Remote Port Remote Host Summary ---------- --- --------- ----- ----- ----------- -------- eth0 10G Mgmt ==== swp6 mgmt-sw IP: 10.0.1.11/24 swp51 10G Interface/L3 ==== swp1 spine01 IP: 10.0.0.11/32 swp52 10G Interface/L3 ==== swp1 spine02 IP: 10.0.0.11/32", "#create vlan DATA tag 110 #create vlan MNGT tag 111 #configure vlan DATA add ports 24 tagged #configure vlan MNGT add ports 24 tagged", "create vlan VLANNAME tag NUMBER configure vlan Default delete ports PORTSTRING configure vlan VLANNAME add ports PORTSTRING untagged", "#create vlan DATA tag 110 #configure vlan Default delete ports 10 #configure vlan DATA add ports 10 untagged", "- type: linux_bond name: bond1 mtu: 9000 bonding_options:{get_param: BondInterfaceOvsOptions}; members: - type: interface name: nic3 mtu: 9000 primary: true - type: interface name: nic4 mtu: 9000",
"BondInterfaceOvsOptions: \"mode=802.3ad\"", "chassis { aggregated-devices { ethernet { device-count 1; } } }", "interfaces { ge-1/0/12 { gigether-options { 802.3ad ae1; } } ge-1/0/13 { gigether-options { 802.3ad ae1; } } }", "interfaces { ae1 { aggregated-ether-options { lacp { active; } } } }", "interfaces { ae1 { vlan-tagging; native-vlan-id 2; unit 100 { vlan-id 100; } } }", "> show lacp statistics interfaces ae1 Aggregated interface: ae1 LACP Statistics: LACP Rx LACP Tx Unknown Rx Illegal Rx ge-1/0/12 0 0 0 0 ge-1/0/13 0 0 0 0", "set interfaces ge-1/0/14 mtu 9216 set interfaces ge-1/0/15 mtu 9216", "set interfaces ae1 mtu 9216", "lldp { interface all{ enable; } } }", "lldp { interface ge-1/0/14{ enable; } } }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/config-physical-switch-osp-network_rhosp-network
Chapter 12. Volume cloning
Chapter 12. Volume cloning A clone is a duplicate of an existing storage volume that can be used like any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 12.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
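The console workflow corresponds to creating a new PVC whose dataSource references the source PVC. A minimal YAML sketch, assuming a hypothetical RBD-backed source PVC named pvc1 in the namespace my-project and the ocs-storagecluster-ceph-rbd storage class; the storage class and the requested size must match the source PVC, because a clone cannot change size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1-clone
  namespace: my-project
spec:
  storageClassName: ocs-storagecluster-ceph-rbd   # must match the source PVC's storage class
  dataSource:
    kind: PersistentVolumeClaim                   # clone from an existing PVC
    name: pvc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                               # must equal the source PVC size

Create it with oc create -f pvc1-clone.yaml and wait for the new PVC to reach the Bound state, as in the console procedure.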
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/volume-cloning_rhodf
11.2. Preparing and Adding NFS Storage
11.2. Preparing and Adding NFS Storage 11.2.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Virtualization hosts. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up and configuring NFS, see Network File System (NFS) in the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on how to export an 'NFS' share, see How to export 'NFS' share from NetApp Storage / EMC SAN in Red Hat Virtualization . Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Procedure Create the group kvm : Create the user vdsm in the group kvm : Set the ownership of your exported directory to 36:36, which gives vdsm:kvm ownership: Change the mode of the directory so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users: 11.2.2. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 11.2.3. Increasing NFS Storage To increase the amount of NFS storage, you can either create a new storage domain and add it to an existing data center, or increase the available free space on the NFS server. For the former option, see Section 11.2.2, "Adding NFS Storage" . The following procedure explains how to increase the available free space on the existing NFS server. Increasing an Existing NFS Storage Domain Click Storage Domains . Click the NFS storage domain's name to open the details view.
Click the Data Center tab and click Maintenance to place the storage domain into maintenance mode. This unmounts the existing share and makes it possible to resize the storage domain. On the NFS server, resize the storage. For Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide . For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide . In the details view, click the Data Center tab and click Activate to mount the storage domain.
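The resize step itself depends on how the export is backed; the guide leaves the method to your storage. As a minimal sketch, assuming the export /exports/data lives on a hypothetical LVM logical volume /dev/vg_data/lv_exports formatted with XFS, you might grow it like this:

# Extend the logical volume by an additional 100 GiB (hypothetical VG/LV names)
lvextend -L +100G /dev/vg_data/lv_exports
# Grow the mounted XFS filesystem to use the new space
xfs_growfs /exports/data

After the filesystem reports the new size, return to the Administration Portal and reactivate the storage domain as described above.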
[ "groupadd kvm -g 36", "useradd vdsm -u 36 -g 36", "chown -R 36:36 /exports/data", "chmod 0755 /exports/data" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Preparing_and_Adding_NFS_Storage
Scalability and performance
Scalability and performance OpenShift Container Platform 4.14 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team
[ "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring", "oc create -f cluster-monitoring-config.yaml", "sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "oc debug node/<node_name>", "lsblk", "#!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid \"USD{device}\" &> /dev/null if [ USD? == 2 ]; then echo \"secondary device found USD{device}\" echo \"creating filesystem for etcd mount\" mkfs.xfs -L var-lib-etcd -f \"USD{device}\" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo \"Couldn't find secondary block device!\" >&2 exit 77", "base64 -w0 etcd-find-secondary-device.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' 
ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target", "oc debug node/<node_name>", "grep -w \"/var/lib/etcd\" /proc/mounts", "/dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", 
"sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"", "Control Plane Hardware Speed: <VALUE>", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"controlPlaneHardwareSpeed\": \"<value>\"}}'", "etcd.operator.openshift.io/cluster patched", "The Etcd \"cluster\" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: \"Faster\": supported values: \"\", \"Standard\", \"Slower\"", "oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"", "Control Plane Hardware Speed: \"\"", "oc get pods -n openshift-etcd -w", "installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s", "oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT", "query=avg_over_time(pod:container_cpu_usage:sum{namespace=\"openshift-kube-apiserver\"}[30m])", "nodes: - hostName: \"example-node1.example.com\" ironicInspect: \"enabled\"", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: storage-lvmcluster namespace: openshift-storage annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: {} storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "cpuPartitioningMode: AllNodes", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: example-config namespace: example-ns spec: additionalImages: - quay.io/foobar/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e - quay.io/foobar/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adf - quay.io/foobar/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfs spaceRequired: 45 GiB 1 overrides: preCacheImage: quay.io/test_images/pre-cache:latest platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: 2 - aws - vsphere", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines", "apiVersion: 
logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: logs: type: \"vector\"", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management", "--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: \"example-storage-class\" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-pvc-name spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it apiVersion: v1 kind: Pod metadata: labels: run: busybox name: busybox spec: containers: - image: quay.io/quay/busybox:latest name: busybox resources: {} command: [\"/bin/sh\", \"-c\", \"sleep infinity\"] volumeMounts: - name: local-pvc mountPath: /data volumes: - name: local-pvc persistentVolumeClaim: claimName: local-pvc-name dnsPolicy: ClusterFirst restartPolicy: Always ## 3. 
Run the pod on the cluster and verify the size and access of the `/data` mount", "apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. 
data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s 
USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #GNSS module s /dev/ttyGNSS* -al use _0 #cat /dev/ttyGNSS_1700_0 to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter 
moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp", "apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {}", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: 
targetNamespaces: - vran-acceleration-operators", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: node-role.kubernetes.io/master: \"\" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: \"vfio-pci\" vfDriver: \"vfio-pci\" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. 
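# Background (general SR-IOV Network Operator behavior, not specific to this file): the operator
# webhook validates SR-IOV policy CRs, while the network resources injector is the component that
# automatically adds these requests/limits to pods that reference SR-IOV networks.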
# If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: true enableOperatorWebhook: true logLevel: 0", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. 
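# For context: cpuPartitioningMode: AllNodes below enables workload partitioning on all nodes at
# install time; the concrete reserved/isolated CPU sets are supplied later by the PerformanceProfile.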
cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: 
{} spec: disableAllDefaultSources: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] 
WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: 
/usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker 
spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" irq-load-balancing.crio.io: \"disable\"", "cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\"", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\"", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: 99-change-pidslimit-custom spec: machineConfigPoolSelector: matchLabels: # Set to appropriate MCP pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: pidsLimit: USDpidsLimit # Example: #pidsLimit: 4096", "required count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==", "required count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {}", "required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: 
openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "required count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"add-net-1\", \"plugins\": [{\"type\": \"macvlan\", \"master\": \"bond1\", \"ipam\": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.4.0\", \"name\": \"add-net-2\", \"plugins\": [ {\"type\": \"macvlan\", \"master\": \"bond1\", \"mode\": \"private\" },{ \"type\": \"tuning\", \"name\": \"tuning-arp\" }] }' # type: Raw", "optional copies: 0-N apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # \"cniVersion\": \"0.3.1\", # \"name\": \"external-169\", # \"type\": \"vlan\", # \"master\": \"ens8f0\", # \"mode\": \"bridge\", # \"vlanid\": 169, # \"ipam\": { # \"type\": \"static\", # } #}'", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system annotations: metallb.universe.tf/address-pool: USDname # eg addresspool3 spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ##############", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: bfdprofile namespace: metallb-system spec: ################ # These values may vary. Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one communities: [USDcommunities] # Note correlation with address pool. 
# eg: # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: bfdprofile", "required count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: node-role.kubernetes.io/worker: \"\"", "required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service", "optional (though expected for all) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: \"USDcapabilities\" # eg '{\"mac\": true, \"ips\": true}' ipam: \"USDipam\" # eg '{ \"type\": \"host-local\", \"subnet\": \"10.3.38.0/24\" }' networkNamespace: USDnns # eg cni-test resourceName: USDresource # eg resourceTest", "optional (though expected in all deployments) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec eg #deviceType: netdevice #nicSelector: deviceID: \"1593\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"8086\" #nodeSelector: kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD", "required count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\" enableInjector: true enableOperatorWebhook: true", "required: yes count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic", "required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management", "required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: 
openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: # Periodic is the default setting infoRefreshMode: Periodic machineConfigPoolSelector: matchLabels: # This label must match the pool(s) you want to run NUMA-aligned workloads pools.operator.machineconfiguration.openshift.io/worker: \"\"", "required count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.14\" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace", "required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management", "required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: \"0\" # Image spec should be the latest for the release imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0\" #logLevel: \"Trace\" schedulerName: topo-aware-scheduler", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwp4dF91MzI= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwp4dF91MzI= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf", "required count: 1 apiVersion: logging.openshift.io/v1 
kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.11.12.13:9092/test pipelines: - inputRefs: - infrastructure #- application - audit labels: label1: test1 label2: test2 label3: test3 label4: test4 label5: test5 name: all-to-default outputRefs: - kafka-open", "required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: vector managementState: Managed", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management", "--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic", "required count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h #status: connectionState: lastObservedState: READY", "required count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] - USDmirrors", "required count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true", "optional count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: dedicatedServiceMonitors: enabled: true prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi", "required count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {\"allowedUnsafeSysctls\":[\"net.ipv6.conf.all.accept_ra\"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37) # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. 
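# For reference, the per-pod controls shown earlier in this document are the pod annotations
# cpu-load-balancing.crio.io, cpu-quota.crio.io and irq-load-balancing.crio.io; they take effect
# only for pods with guaranteed QoS running on nodes tuned by this profile.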
# default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G machineConfigPoolSelector: # For SNO: machineconfiguration.openshift.io/role: 'master' pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: # For SNO: node-role.kubernetes.io/master: \"\" node-role.kubernetes.io/worker: \"\" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: \"single-numa-node\" net: userLevelNetworking: false", "required pods per cluster / pods per node = total number of nodes needed", "2200 / 500 = 4.4", "2200 / 20 = 110", "required pods per cluster / total number of nodes = expected pods per node", "--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template", "oc create quota <name> --hard=count/<resource>.<group>=<quota> 1", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0", "cat gpu-quota.yaml", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "oc create -f gpu-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES
value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: \"2\" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 limits.ephemeral-storage: \"4Gi\" 4 scopes: - NotTerminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 limits.ephemeral-storage: \"1Gi\" 4 scopes: - Terminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f <resource_quota_definition> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota \"test\" created oc describe quota test Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m", "oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - \"10s\"", "master-restart api master-restart controllers", "admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: 
persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"core-resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 - type: \"Container\" max: cpu: \"2\" 6 memory: \"1Gi\" 7 min: cpu: \"100m\" 8 memory: \"4Mi\" 9 default: cpu: \"300m\" 10 memory: \"200Mi\" 11 defaultRequest: cpu: \"200m\" 12 memory: \"100Mi\" 13 maxLimitRequestRatio: cpu: \"10\" 14", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"openshift-resource-limits\" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: \"Pod\" max: cpu: \"2\" 4 memory: \"1Gi\" 5 ephemeral-storage: \"1Gi\" 6 min: cpu: \"1\" 7 memory: \"1Gi\" 8", "{ \"apiVersion\": \"v1\", \"kind\": \"LimitRange\", \"metadata\": { \"name\": \"pvcs\" 1 }, \"spec\": { \"limits\": [{ \"type\": \"PersistentVolumeClaim\", \"min\": { \"storage\": \"2Gi\" 2 }, \"max\": { \"storage\": \"50Gi\" 3 } } ] } }", "oc create -f <limit_range_file> -n <project>", "oc get limits -n demoproject", "NAME AGE resource-limits 6d", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - -", "oc delete limits <limit_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20entries%20for%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf", "oc create -f enable-rfs.yaml", "oc get mc", "oc delete mc 50-enable-rfs", "cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805", "cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805", "oc create -f 05-master-kernelarg-hpav.yaml", "oc create -f 05-worker-kernelarg-hpav.yaml", "oc delete -f 05-master-kernelarg-hpav.yaml", "oc delete -f 05-worker-kernelarg-hpav.yaml", "<domain> <iothreads>3</iothreads> 1 <devices>
<disk type=\"block\" device=\"disk\"> 2 <driver ... iothread=\"2\"/> </disk> </devices> </domain>", "<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>", "<memballoon model=\"none\"/>", "sysctl kernel.sched_migration_cost_ns=60000", "kernel.sched_migration_cost_ns=60000", "cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]", "systemctl restart libvirtd", "echo 0 > /sys/module/kvm/parameters/halt_poll_ns", "echo 80000 > /sys/module/kvm/parameters/halt_poll_ns", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m", "oc get co/node-tuning -n openshift-cluster-node-tuning-operator", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE node-tuning 4.14.1 True False True 60m 1/5 Profiles with bootcmdline conflict", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... 
other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\\n' | sed 's|^.*/||'", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-no-reapply-sysctl namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.max_map_count=>524288 name: openshift-no-reapply-sysctl recommend: - match: - label: tuned.openshift.io/openshift-no-reapply-sysctl priority: 15 profile: openshift-no-reapply-sysctl operand: tunedConfig: reapply_sysctl: false", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: 
tuned-1-profile", "oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 7m36s rendered 7m36s tuned-1 65s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio", "vm.dirty_ratio = 55", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-hugepages namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 name: openshift-node-hugepages recommend: - priority: 20 profile: openshift-node-hugepages", "oc --kubeconfig=\"<management_cluster_kubeconfig>\" create -f tuned-hugepages.yaml 1", "hcp create nodepool aws --cluster-name <hosted_cluster_name> \\ 1 --name <nodepool_name> \\ 2 --node-count <nodepool_replicas> \\ 3 --instance-type <instance_type> \\ 4 --render > hugepages-nodepool.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: hugepages-nodepool namespace: clusters spec: management: upgradeType: InPlace tuningConfig: - name: tuned-hugepages", "oc --kubeconfig=\"<management_cluster_kubeconfig>\" create -f hugepages-nodepool.yaml", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 123m hugepages-8dfb1fed 1m23s rendered 123m", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 openshift-node True False 132m nodepool-1-worker-2 openshift-node True False 131m hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline", "BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... 
hugepagesz=2M hugepages=50", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "β”œβ”€init.scope β”‚ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice β”œβ”€kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β”‚ β”œβ”€crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β”‚ └─32706 /pause", "cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources", "oc create -f nro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "oc create -f nro-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.14\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nro-sub.yaml", "oc get csv -n openshift-numaresources", "NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.14.2 numaresources-operator 4.14.2 Succeeded", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1", "oc create -f nrop.yaml", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other", "oc get numaresourcesoperators.nodetopology.openshift.io", "NAME AGE numaresourcesoperator 27s", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14\" 1", "oc create -f nro-scheduler.yaml", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE 
deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"3\" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 nodeSelector: node-role.kubernetes.io/worker: \"\" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 kubeletConfig: cpuManagerPolicy: \"static\" 2 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" 3 memoryManagerPolicy: \"Static\" 4 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 5", "oc create -f nro-kubeletconfig.yaml", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", "\"topo-aware-scheduler\"", "apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"", "oc create -f nro-deployment.yaml", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m", "oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1", "oc get pods -n openshift-numaresources -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none>", "oc describe noderesourcetopologies.topology.node.k8s.io worker-1", "Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 
Name: memory Type: Node", "oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath=\"{ .status.qosClass }\"", "Guaranteed", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker", "oc get numaresop numaresourcesoperator -o json | jq '.status'", "\"config\": { \"infoRefreshMode\": \"Periodic\", \"infoRefreshPeriod\": \"10s\", \"podsFingerprinting\": \"Enabled\" }, \"name\": \"worker\"", "oc get crd | grep noderesourcetopologies", "NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", "topo-aware-scheduler", "oc get noderesourcetopologies.topology.node.k8s.io", "NAME AGE compute-0.example.com 17h compute-1.example.com 17h", "oc get noderesourcetopologies.topology.node.k8s.io -o yaml", "apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: 
\"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 92m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.14\" cacheResyncPeriod: \"5s\" 1", "oc create -f nro-scheduler-cacheresync.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 90m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.14\" logLevel: Debug", "oc create -f nro-scheduler-debug.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] 
k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"", "{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}", "oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"", "{\"name\":\"resource-topology\"}", "oc get pods -n openshift-numaresources -l name=resource-topology -o wide", "NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com", "oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c", "I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi", "Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc get kubeletconfig -o yaml", "machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled", "oc get mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"", "oc edit mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.14", "oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"livenessProbe\":{\"timeoutSeconds\":5},\"readinessProbe\":{\"timeoutSeconds\":5}}]}}}}'", "oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge 
--patch='{\"spec\":{\"tuningOptions\":{\"reloadInterval\":\"15s\"}}}'", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service", "oc apply -f mount_namespace_config.yaml", "machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1", "oc wait --for=condition=Updated mcp --all --timeout=30m", "machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# readlink /proc/1/ns/mnt", "mnt:[4026531953]", "sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt", "mnt:[4026531840]", "sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt", "mnt:[4026531840]", "ssh core@<node_name>", "[core@control-plane-1 ~]USD sudo kubensenter findmnt", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs", "[core@control-plane-1 ~]USD sudo kubensenter", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt", "[Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>", "oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'", "oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'", "oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'", "oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.provisioning.state}{\"\\n\"}{end}'", "master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed", "oc adm cordon <bare_metal_host> 1", "oc adm drain <bare_metal_host> --force=true", "oc patch <bare_metal_host> --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/online\", \"value\": false}]'", "oc adm uncordon <bare_metal_host>", "apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events 
openshift.io/cluster-monitoring: \"true\"", "oc create -f bare-metal-events-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events", "oc create -f bare-metal-events-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: \"stable\" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f bare-metal-events-sub.yaml", "oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase", "oc get pods -n amq-interconnect", "NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h", "oc get pods -n openshift-bare-metal-events", "NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s", "curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u \"<bmc_username>:<password>\"", "{ \"@odata.context\": \"/redfish/v1/USDmetadata#EventService.EventService\", \"@odata.id\": \"/redfish/v1/EventService\", \"@odata.type\": \"#EventService.v1_0_2.EventService\", \"Actions\": { \"#EventService.SubmitTestEvent\": { \"EventType@Redfish.AllowableValues\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"target\": \"/redfish/v1/EventService/Actions/EventService.SubmitTestEvent\" } }, \"DeliveryRetryAttempts\": 3, \"DeliveryRetryIntervalSeconds\": 30, \"Description\": \"Event Service represents the properties for the service\", \"EventTypesForSubscription\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"EventTypesForSubscription@odata.count\": 5, \"Id\": \"EventService\", \"Name\": \"Event Service\", \"ServiceEnabled\": true, \"Status\": { \"Health\": \"OK\", \"HealthRollup\": \"OK\", \"State\": \"Enabled\" }, \"Subscriptions\": { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\" } }", "oc get route -n openshift-bare-metal-events", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None", "apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: ''", "oc create -f bmc_sub.yaml", "oc delete -f bmc_sub.yaml", "curl -i -k -X POST -H \"Content-Type: application/json\" -d '{\"Destination\": \"https://<proxy_service_url>\", \"Protocol\" : \"Redfish\", \"EventTypes\": [\"Alert\"], \"Context\": \"root\"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v", "HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: \"1651135676\"
Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT", "curl --globoff -H \"Content-Type: application/json\" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { \"@odata.context\": \"/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection\", \"@odata.etag\": \"\\\"1651137375\\\"\", \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\", \"@odata.type\": \"#EventDestinationCollection.EventDestinationCollection\", \"Description\": \"Collection for Event Subscriptions\", \"Members\": [ { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions/1\" }], \"Members@odata.count\": 1, \"Name\": \"Event Subscriptions Collection\" }", "curl --globoff -L -w \"%{http_code} %{url_effective}\\n\" -k -u <bmc_username>:<password> -H \"Content-Type: application/json\" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1", "apiVersion: \"event.redhat-cne.org/v1alpha1\" kind: \"HardwareEvent\" metadata: name: \"hardware-event\" spec: nodeSelector: node-role.kubernetes.io/hw-event: \"\" 1 logLevel: \"debug\" 2 msgParserTimeout: \"10\" 3", "oc create -f hardware-event.yaml", "apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address>", "oc create -f hw-event-bmc-secret.yaml", "[ { \"id\": \"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\": \"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "{ \"id\":\"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\":\"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\":\"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "OK", "containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" 1 - \"--api-port=8089\"", "apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\"
cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi", "oc create -f hugepages-volume-pod.yaml", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI", "REQUESTS_HUGEPAGES_1GI=2147483648", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request", "2", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker", "oc create -f thp-disable-tuned.yaml", "oc get profile -n openshift-cluster-node-tuning-operator", "cat /sys/kernel/mm/transparent_hugepage/enabled", "always madvise [never]", "oc label node <node_name> node-role.kubernetes.io/worker-cnf=\"\" 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf 1 labels: machineconfiguration.openshift.io/role: worker-cnf 2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-cnf], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: \"\" 3", "oc apply -f mcp-worker-cnf.yaml", "machineconfigpool.machineconfiguration.openshift.io/worker-cnf created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT 
AGE master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_folder> 1", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.14 -h", "A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.14 --info log --must-gather-dir-path /must-gather", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.14 --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "level=info msg=\"Nodes targeted by worker-cnf MCP are: [worker-2]\" level=info msg=\"NUMA cell(s): 1\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"1 reserved CPUs allocated: 0 \" level=info msg=\"2 isolated CPUs allocated: 2-3\" level=info msg=\"Additional Kernel Args based on configuration: []\"", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "vi run-perf-profile-creator.sh", "#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" NTO_IMG=\"registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.14\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Node Tuning Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" && USD{CMD} \"USD{NTO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" || USD{IMG_PULL_CMD} \"USD{NTO_IMG}\" || exit_error \"Node Tuning Operator image not found\" [ -n 
\"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) NTO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{NTO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"", "chmod a+x run-perf-profile-creator.sh", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "./run-perf-profile-creator.sh -h", "Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image -t path to a must-gather tarball A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]", "Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1]", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: 
\"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "workloadHints: highPowerConsumption: false realTime: false", "workloadHints: highPowerConsumption: false realTime: true", "workloadHints: highPowerConsumption: true realTime: true", "workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.14 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.14 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - cpufreq.default_governor=schedutil 1", "spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"", "lscpu --all --extended", "CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000", "cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "0,4", "cpu: isolated: 0,4 reserved: 1-3,5-7",
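"# Illustrative helper, not from the source docs: list each CPU's sibling threads to help derive the isolated and reserved sets", "for c in /sys/devices/system/cpu/cpu[0-9]*; do echo \"USD{c##*/}: USD(cat USDc/topology/thread_siblings_list)\"; done",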
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true", "find /proc/irq -name effective_affinity -printf \"%p: \" -exec cat {} \\;", "/proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1", "apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: dynamic-irq-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" runtimeClassName: performance-dynamic-irq-profile", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none>", "oc exec -it dynamic-irq-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"", "Cpus_allowed_list: 2-3", "oc debug node/<node-name>", "Starting pod/<node-name>-debug To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. sh-4.4#", "sh-4.4# chroot /host", "sh-4.4#", "cat /proc/irq/default_smp_affinity", "33", "find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;", "/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5", "hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1", "oc debug node/ip-10-0-141-105.ec2.internal", "grep -i huge /proc/meminfo", "AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##", "oc describe node worker-0.ocp4poc.example.com | grep -i huge", "hugepages-1g=true hugepages-###: ### hugepages-###: ###", "spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G", "oc edit -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc apply -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID: \"0x1af4\"", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "udevadm info -p /sys/class/net/ens4", "E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4", "udevadm info -p /sys/class/net/eth0", "E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\"", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3", "WARNING tuned.plugins.base: instance net_test: no matching devices available", "apiVersion: v1 kind: Pod metadata: name: dynamic-low-latency-pod annotations: cpu-quota.crio.io: \"disable\" 1 cpu-load-balancing.crio.io: \"disable\" 2 irq-load-balancing.crio.io: \"disable\" 3 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: dynamic-low-latency-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 4 runtimeClassName: performance-dynamic-low-latency-profile 5", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE dynamic-low-latency-pod 1/1 Running 0 5h33m 10.131.0.10 cnf-worker.example.com", "oc exec -it dynamic-low-latency-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"", "Cpus_allowed_list: 2-3", "oc debug node/<node-name>", "sh-4.4# chroot /host", "sh-4.4#", "sh-4.4# cat /proc/irq/default_smp_affinity", "33", "sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;", "/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5", "apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\"
requests: memory: \"200Mi\" cpu: \"1\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc apply -f qos-pod.yaml --namespace=qos-example", "oc get pod qos-demo --namespace=qos-example --output=yaml", "spec: containers: status: qosClass: Guaranteed", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-load-balancing.crio.io: \"disable\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>", "Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h", "oc describe mcp worker-cnf", "Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync", "oc describe performanceprofiles performance", "Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\\\".
Reason: MCPDegraded Status: True Type: Degraded", "oc adm must-gather", "[must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable", "tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"hwlatdetect\" --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 194 specs [...] 
• Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL!
-- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (366.08s) FAIL", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test ts: 1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"cyclictest\" --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 194 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL!
-- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.48s) FAIL", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"oslat\" --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF 
Features e2e integration tests ================================================= Random Seed: 1662641514 Will run 1 of 194 specs [...] • Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.42s) FAIL", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh --report <report_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junitdest:<junit_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh --junit <junit_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=master registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\" --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.14\" <disconnected_registry>/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\" --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/test-run.sh --ginkgo.timeout=\"24h\"", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk '{print USD1}')", "TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq
'.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.14 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true -a=USD(pwd)/dockerauth.json -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\" --ginkgo.timeout=\"24h\"", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.14\" } ]", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.14 get nodes", "openshift-install create manifests --dir=<cluster-install-dir>", "vi <cluster-install-dir>/manifests/config-node-default-profile.yaml", "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: \"Default\"", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get KubeAPIServer -o yaml | grep -A 1 default-", "default-not-ready-toleration-seconds: - \"300\" default-unreachable-toleration-seconds: - \"300\"", "oc get KubeControllerManager -o yaml | grep 
-A 1 node-monitor", "node-monitor-grace-period: - 40s", "oc debug node/<worker-node-name> chroot /host cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency", "\"nodeStatusUpdateFrequency\": \"10s\"", "apiVersion: v1 baseDomain: devcluster.openshift.com cpuPartitioningMode: AllNodes 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. 
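# If power consumption is not a concern, highPowerConsumption: true together with realTime: true yields the lowest latency, per the workloadHints combinations shown earlier.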
realTime: true highPowerConsumption: false perPodPowerManagement: false", "oc get packagemanifests -n openshift-marketplace node-observability-operator", "NAME CATALOG AGE node-observability-operator Red Hat Operators 9h", "oc new-project node-observability-operator", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name'", "install-dt54w", "oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase'", "COMPLETE", "oc get deploy -n node-observability-operator", "NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h", "oc login -u kubeadmin https://<HOSTNAME>:6443", "oc project node-observability-operator", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: crio-kubelet", "oc apply -f nodeobservability.yaml", "nodeobservability.olm.openshift.io/cluster created", "oc get nob/cluster -o yaml | yq '.status.conditions'", "conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: \"True\" type: Ready", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster", "oc apply -f nodeobservabilityrun.yaml", "oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions'", "conditions: - lastTransitionTime: \"2022-07-07T14:57:34Z\" message: Ready to start profiling reason: Ready status: \"True\" type: Ready - lastTransitionTime: \"2022-07-07T14:58:10Z\" message: Profiling query done reason: Finished status: \"True\" type: Finished", "for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo \"agent USD{a}\" mkdir -p \"/tmp/USD{a}\" for p in USD(oc exec \"USD{a}\" -c node-observability-agent -- bash -c \"ls /run/node-observability/*.pprof\"); do f=\"USD(basename USD{p})\" echo \"copying USD{f} to /tmp/USD{a}/USD{f}\" oc exec \"USD{a}\" -c node-observability-agent -- cat \"USD{p}\" > \"/tmp/USD{a}/USD{f}\" done done", "export ISO_IMAGE_NAME=<iso_image_name> 1", "export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1", "export OCP_VERSION=<ocp_version> 1", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}", "wget http://USD(hostname)/USD{ISO_IMAGE_NAME}", "Saving to: rhcos-4.14.1-x86_64-live.x86_64.iso rhcos-4.14.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s", "oc edit AgentServiceConfig", "- cpuArchitecture: x86_64 openshiftVersion: \"4.14\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url:
https://<host>/<path>/rhcos-live.x86_64.iso", "apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/example-repository\" 4 mirror-by-digest-only = true [[registry.mirror]] location = \"mirror1.registry.corp.com:5000/example-repository\" 5", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> 3 url: <iso_url> 4", "oc edit AgentServiceConfig agent", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com", "oc debug node/<node_name>", "sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry>", "Login Succeeded!", "{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment", "oc -n openshift-gitops get applications.argoproj.io clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' |jq", "[ \"CreateNamespace=true\", \"PrunePropagationPolicy=background\", \"RespectIgnoreDifferences=true\" ]", "kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background", "podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out", "example/ β”œβ”€β”€ policygentemplates β”‚ β”œβ”€β”€ kustomization.yaml β”‚ └── source-crs/ └── siteconfig β”œβ”€β”€ extra-manifests └── kustomization.yaml", "example/ β”œβ”€β”€ policygentemplates β”‚ β”œβ”€β”€ common-ranGen.yaml β”‚ β”œβ”€β”€ example-sno-site.yaml β”‚ β”œβ”€β”€ group-du-sno-ranGen.yaml β”‚ β”œβ”€β”€ group-du-sno-validator-ranGen.yaml β”‚ β”œβ”€β”€ kustomization.yaml β”‚ β”œβ”€β”€ source-crs/ β”‚ └── ns.yaml └── siteconfig β”œβ”€β”€ example-sno.yaml β”œβ”€β”€ extra-manifests/ 1 β”œβ”€β”€ custom-manifests/ 2 β”œβ”€β”€ KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "β”œβ”€β”€ policygentemplates β”‚ β”œβ”€β”€ kustomization.yaml 1 β”‚ β”œβ”€β”€ version_4.13 2 β”‚ 
│ ├── common-ranGen.yaml │ │ ├── group-du-sno-ranGen.yaml │ │ ├── group-du-sno-validator-ranGen.yaml │ │ ├── helix56-v413.yaml │ │ ├── kustomization.yaml 3 │ │ ├── ns.yaml │ │ └── source-crs/ 4 │ │ └── reference-crs/ 5 │ │ └── custom-crs/ 6 │ └── version_4.14 7 │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── helix56-v414.yaml │ ├── kustomization.yaml 8 │ ├── ns.yaml │ └── source-crs/ 9 │ └── reference-crs/ 10 │ └── custom-crs/ 11 └── siteconfig ├── kustomization.yaml ├── version_4.13 │ ├── helix56-v413.yaml │ ├── kustomization.yaml │ ├── extra-manifest/ 12 │ └── custom-manifest/ 13 └── version_4.14 ├── helix57-v414.yaml ├── kustomization.yaml ├── extra-manifest/ 14 └── custom-manifest/ 15", "extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2", "resources: - version_4.13 1 #- version_4.14 2", "mkdir -p ./update", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./update", "oc get managedcluster -l 'local-cluster!=true'", "oc label managedcluster -l 'local-cluster!=true' ztp-done=", "oc delete -f update/argocd/deployment/clusters-app.yaml", "oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge", "oc delete -f update/argocd/deployment/policies-app.yaml", "├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml", "{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment", "grep -r \"ztp-deploy-wave\" out/source-crs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type:
kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"", "~/example-ztp/install └── site-install β”œβ”€β”€ siteconfig-example.yaml β”œβ”€β”€ InfraEnv-example.yaml", "clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "export CLUSTERNS=example-sno", "oc create namespace USDCLUSTERNS", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. 
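# The same setting is available at install time as cpuPartitioningMode: AllNodes in install-config.yaml (see the install-config example earlier).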
cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "oc describe node example-node.example.com", "Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos", "export CLUSTER=<clusterName>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get AgentClusterInstall -n <cluster_name>", "oc get managedcluster", "oc get applications.argoproj.io -n openshift-gitops clusters -o yaml", "syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested 
resource spoke-sno/spoke-sno. Make sure the \"SiteConfig\" CRD is installed on the destination cluster", "siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory", "Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'", "kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background", "oc delete policy -n <namespace> <policy_name>", "oc delete -k out/argocd/deployment", "--- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"common\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: SriovOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: PtpOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogNS.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogSubscription.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: StorageNS.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: StorageSubscription.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ReduceMonitoringFootprint.yaml policyName: \"config-policy\" - fileName: OperatorHub.yaml 3 policyName: \"config-policy\" - fileName: DefaultCatsrc.yaml 4 policyName: \"config-policy\" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: \"config-policy\" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"ztp-group\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -n 24\"", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: 
policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install $CLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq", "{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" }", "oc get policies -n $CLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m", "export NS=<namespace>", "oc get policy -n $NS", "oc describe -n openshift-gitops application policies", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error", "oc get policy -n $CLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d", "oc get placementrule -n $NS", "oc get placementrule -n $NS <placementRuleName> -o yaml", "oc get ManagedCluster $CLUSTER -o jsonpath='{.metadata.labels}' | jq", "oc get policy -n $CLUSTER", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install $CLUSTER", "oc get clustergroupupgrades -n ztp-install $CLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'", "oc delete clustergroupupgrades -n ztp-install $CLUSTER", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/$mcp\": \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true", "- fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave",
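The per-namespace compliance queries above can be wrapped in a small script when one hub manages many spoke clusters. A minimal sketch, assuming `oc` is logged in to the hub and that, as the `oc get policies -n $CLUSTER` checks rely on, each spoke cluster's policies are propagated into the hub namespace named after the cluster; the column layout is illustrative:

#!/usr/bin/env bash
# Illustrative only: summarize policy compliance for every managed cluster on the hub.
for cluster in $(oc get managedclusters -o jsonpath='{.items[*].metadata.name}'); do
  echo "== ${cluster} =="
  # Policies for a spoke cluster live in the hub namespace that carries the cluster name.
  oc get policies -n "${cluster}" \
    -o custom-columns='NAME:.metadata.name,STATE:.status.compliant'
done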
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction:", "oc create -f cgu-remove.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get <kind> <changed_cr_name>", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h", "oc get <kind> <changed_cr_name>", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out", "out └── argocd └── example β”œβ”€β”€ policygentemplates β”‚ β”œβ”€β”€ common-ranGen.yaml β”‚ β”œβ”€β”€ example-sno-site.yaml β”‚ β”œβ”€β”€ group-du-sno-ranGen.yaml β”‚ β”œβ”€β”€ group-du-sno-validator-ranGen.yaml β”‚ β”œβ”€β”€ kustomization.yaml β”‚ └── ns.yaml └── siteconfig β”œβ”€β”€ example-sno.yaml β”œβ”€β”€ KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "mkdir -p ./site-install", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. 
Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator install site-1-sno.yaml /output", "site-install └── site-1-sno β”œβ”€β”€ site-1_agentclusterinstall_example-sno.yaml β”œβ”€β”€ site-1-sno_baremetalhost_example-node1.example.com.yaml β”œβ”€β”€ site-1-sno_clusterdeployment_example-sno.yaml β”œβ”€β”€ site-1-sno_configmap_example-sno.yaml β”œβ”€β”€ site-1-sno_infraenv_example-sno.yaml β”œβ”€β”€ site-1-sno_klusterletaddonconfig_example-sno.yaml β”œβ”€β”€ site-1-sno_machineconfig_02-master-workload-partitioning.yaml β”œβ”€β”€ site-1-sno_machineconfig_predefined-extra-manifests-master.yaml β”œβ”€β”€ site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml β”œβ”€β”€ site-1-sno_managedcluster_example-sno.yaml β”œβ”€β”€ site-1-sno_namespace_example-sno.yaml └── 
site-1-sno_nmstateconfig_example-node1.example.com.yaml", "mkdir -p ./site-machineconfig", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator install -E site-1-sno.yaml /output", "site-machineconfig └── site-1-sno β”œβ”€β”€ site-1-sno_machineconfig_02-master-workload-partitioning.yaml β”œβ”€β”€ site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml", "mkdir -p ./ref", "podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator config -N . /output", "ref └── customResource β”œβ”€β”€ common β”œβ”€β”€ example-multinode-site β”œβ”€β”€ example-sno β”œβ”€β”€ group-du-3node β”œβ”€β”€ group-du-3node-validator β”‚ └── Multiple-validatorCRs β”œβ”€β”€ group-du-sno β”œβ”€β”€ group-du-sno-validator β”œβ”€β”€ group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs", "oc describe node example-node.example.com", "Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.14.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 2", "oc apply -f clusterImageSet-4.14.yaml", "apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2", "oc apply -f cluster-namespace.yaml", "oc apply -R ./site-install/site-1-sno", "oc get managedcluster", "oc get agent -n <cluster_name>", "oc describe agent -n <cluster_name>", "oc get agentclusterinstall -n <cluster_name>", "oc describe agentclusterinstall -n <cluster_name>", "oc get managedclusteraddon -n <cluster_name>", "oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig", "oc get managedcluster", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h", "oc get clusterdeployment -n <cluster_name>", "NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h", "oc describe agentclusterinstall -n <cluster_name> <cluster_name>", "oc delete managedcluster <cluster_name>", "oc delete namespace
<cluster_name>", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" cpuPartitioningMode: AllNodes 1", "oc debug node/example-sno-1", "sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done", "pid 8481's current affinity list: 0-1,52-53 pid 8726's current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53", "sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done", "pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 0-1,52-53", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] 
ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY2
92ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. 
Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: 
management labels: openshift.io/cluster-monitoring: \"true\" --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: logs: type: \"vector\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: 
\"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 
pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s USDiface_master -n 24 
ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #GNSS module s /dev/ttyGNSS* -al use _0 #cat /dev/ttyGNSS_1700_0 to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median 
delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: true enableOperatorWebhook: true logLevel: 0", "containers: - name: my-sriov-workload-container resources: limits: openshift.io/<resource_name>: \"1\" requests: openshift.io/<resource_name>: \"1\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. 
# deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: $deviceType isRdma: $isRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [$pfNames] nodeSelector: node-role.kubernetes.io/$mcp: \"\" numVfs: $numVfs priority: $priority resourceName: $resourceName", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "installConfigOverrides: \"{\\\"capabilities\\\":{\\\"baselineCapabilitySet\\\": \\\"None\\\" }}\"", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h", "apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: odf-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 deviceSelector: paths: - /usr/disk/by-path/pci-0000:11:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 overprovisionRatio: 10 sizePercent: 90", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true", "spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"module_blacklist=irdma\"", "spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name, for example: # include=openshift-node-performance-${PerformanceProfile.metadata.name} # When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from the [sysctl] section data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable", "OCP_VERSION=$(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}')", "DTK_IMAGE=$(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:$OCP_VERSION-x86_64)", "podman run --rm $DTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc debug node/<node_name>", "sh-4.4# uname -r", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc get operatorhub cluster -o yaml", "spec: disableAllDefaultSources: true", "oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.target\\.workload\\.openshift\\.io/management}{\"\\n\"}{end}'", "certified-operators -- {\"effect\": \"PreferredDuringScheduling\"} community-operators -- {\"effect\": \"PreferredDuringScheduling\"} ran-operators 1 redhat-marketplace -- {\"effect\": \"PreferredDuringScheduling\"} redhat-operators -- {\"effect\": \"PreferredDuringScheduling\"}", "oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.workload\\.openshift\\.io/allowed}{\"\\n\"}{end}'", "default -- openshift-apiserver -- management openshift-apiserver-operator -- management
openshift-authentication -- management openshift-authentication-operator -- management", "oc get -n openshift-logging ClusterLogForwarder instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: \"2022-07-19T21:51:41Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"1030342\" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open", "oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: \"2022-07-07T18:22:56Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"235796\" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed", "oc get consoles.operator.openshift.io cluster -o jsonpath=\"{ .spec.managementState }\"", "Removed", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# systemctl status chronyd", "● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)", "PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'", "sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00", "oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container", "phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533", "oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath=\"{.spec.disableDrain}{'\\n'}\"", "true", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath=\"{.items[*].status.syncStatus}{'\\n'}\"", "Succeeded", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml", "apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: 
SriovNetworkNodeState status: interfaces: - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: \"8086\" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: \"8086\" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: \"8086\" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: \"8086\" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: \"8086\" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: \"8086\" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: \"8086\" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: \"8086\" vfID: 7", "oc get PerformanceProfile openshift-node-performance-profile -o yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: \"2022-07-19T21:51:31Z\" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: \"33558\" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Available - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Upgradeable - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Progressing - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile", "oc get performanceprofile openshift-node-performance-profile -o jsonpath=\"{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\\n'}{end}\"", "Available -- True Upgradeable -- True Progressing -- False Degraded -- False", "oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: \"2022-07-18T10:33:52Z\" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: \"34024\" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch", "oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'", "true", "oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION", 
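The PerformanceProfile condition listing shown above can be turned into a pass/fail check in the same spirit. A minimal sketch, assuming the profile name used throughout this reference configuration (the Environment line that follows is the expected output of the preceding oc describe machineconfig check):

#!/usr/bin/env bash
# Illustrative only: succeed when the reference PerformanceProfile is Available and not Degraded.
profile=openshift-node-performance-profile
available=$(oc get performanceprofile "$profile" -o jsonpath='{.status.conditions[?(@.type=="Available")].status}')
degraded=$(oc get performanceprofile "$profile" -o jsonpath='{.status.conditions[?(@.type=="Degraded")].status}')
if [ "$available" = "True" ] && [ "$degraded" = "False" ]; then
  echo "PerformanceProfile $profile is healthy"
else
  echo "PerformanceProfile $profile is not healthy" >&2
  exit 1
fi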
"Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\"", "oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath=\"{ .data.config\\.yaml }\"", "grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h", "oc get route -n openshift-monitoring alertmanager-main", "oc get route -n openshift-monitoring grafana", "oc get performanceprofile -o jsonpath=\"{ .items[0].spec.cpu.reserved }\"", "0-3", "siteconfig β”œβ”€β”€ site1-sno-du.yaml β”œβ”€β”€ site2-standard-du.yaml β”œβ”€β”€ extra-manifest/ └── custom-manifest └── 01-example-machine-config.yaml", "clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.14\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml", "- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude", "clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml", "siteconfig β”œβ”€β”€ site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: nodes: - hostname: node6 role: \"worker\" crAnnotations: add: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true", "get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations[\"bmac.agent-install.openshift.io/remove-agent-and-node-on-delete\"]'", "true", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: - nodes: - hostName: node6 role: \"worker\" crSuppression: - BareMetalHost", "oc get bmh -n <cluster-ns>", "oc get agent -n <cluster-ns>", "oc get nodes", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14.1 extract /home/ztp --tar | tar x -C ./out", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - 
rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"", "example └── policygentemplates β”œβ”€β”€ dev.yaml β”œβ”€β”€ kustomization.yaml β”œβ”€β”€ mec-edge-sno1.yaml β”œβ”€β”€ sno.yaml └── source-crs 1 β”œβ”€β”€ PaoCatalogSource.yaml β”œβ”€β”€ PaoSubscription.yaml β”œβ”€β”€ custom-crs | β”œβ”€β”€ apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch β”œβ”€β”€ ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-dev\" namespace: \"ztp-clusters\" spec: bindingRules: dev: \"true\" mcp: \"master\" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: \"group-dev-cluster-log-ns\" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: \"group-dev-cluster-log-operator-group\" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: \"group-dev-cluster-log-sub\" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: \"group-dev-lso-ns\" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: \"group-dev-lso-operator-group\" - fileName: StorageSubscription.yaml remediationAction: inform policyName: \"group-dev-lso-sub\" #These are custom local policies that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: \"group-dev-pao-ns\" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: \"group-dev-pao-cat-source\" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: \"group-dev-pao-sub\" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: \"group-dev-elasticsearch-ns\" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: \"group-dev-elasticsearch-operator-group\" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: \"group-dev-apiserver-config\" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: \"group-dev-disable-nic-lldp\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgu-test.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies", "spec: evaluationInterval: compliant: 30m noncompliant: 20s", "spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: \"sriov-sub-policy\" evaluationInterval: compliant: never noncompliant: 10s", "oc get pods -n open-cluster-management-agent-addon", "NAME READY STATUS RESTARTS AGE 
config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d", "oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb", "2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - \"cpufreq.default_governor=schedutil\" 1", "oc get nodes", "oc debug node/<node-name>", "chroot /host", "cat /proc/cmdline", "- fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" spec: profile: - name: performance-patch data: | [...] 
[sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1", "- fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.14 policyName: subscription-policies", "- fileName: StorageLVMSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMSubscription.yaml policyName: subscription-policies", "- fileName: StorageLVMCluster.yaml policyName: \"lvms-config\" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs", "#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: \"amqp://amq-router.amq-router.svc.cluster.local\"", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: HardwareEvent.yaml 1 policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043\" logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: 
\"subscriptions-policy\"", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "- fileName: HardwareEvent.yaml policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] 
spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]'", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "oc debug node/my-sno-node", "chroot /host", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk β”œβ”€sda1 8:1 0 1M 0 part β”œβ”€sda2 8:2 0 127M 0 part β”œβ”€sda3 8:3 0 384M 0 part /boot β”œβ”€sda4 8:4 0 243.6G 0 part /var β”‚ /sysroot/ostree/deploy/rhcos/var β”‚ /usr β”‚ /etc β”‚ / β”‚ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: 
ImageRegistryPV.yaml 2 policyName: \"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"", "cluster=<managed_cluster_name>", "oc get secret -n $cluster $cluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-$cluster", "oc get secret -n $cluster $cluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-$cluster && export KUBECONFIG=./kubeconfig-$cluster", "oc get image.config.openshift.io cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice", "oc get pv image-registry-sc", "oc get pods -n openshift-image-registry | grep registry*", "cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d", "oc debug node/sno-1.example.com", "sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom", "argocd.argoproj.io/sync-options: Replace=true", "{{hub fromConfigMap \"default\" \"test-config\" \"common-key\" hub}}", "{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) hub}}", "{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) | toBool hub}}", "{{hub (printf \"%s-name\" .ManagedClusterName) | fromConfigMap \"default\" \"test-config\" | toInt hub}}", "apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: \"8\" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: \"10\" example-sno-du_fh-vlan: \"140\" example-sno-du_mh-numVfs: \"8\" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: \"10\" example-sno-du_mh-vlan: \"150\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"site\" namespace: \"ztp-site\" spec: remediationAction: inform bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-pf\" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-priority\" .ManagedClusterName) | 
toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-pf\" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_mh", "apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: \"101\" site-2-vlan: \"234\"", "- fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data\" (printf \"%s-vlan\" .ManagedClusterName) | toInt hub}}'", "oc delete policy <policy_name> -n <policy_namespace>", "oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"", "oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgr-example.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f talm-subscription.yaml", "oc get csv -n openshift-operators", "NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.14.x Topology Aware Lifecycle Manager 4.14.x Succeeded", "oc get deploy -n openshift-operators", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s", "spec: remediationStrategy: maxConcurrency: 1 timeout: 240", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: \"\" deleteClusterLabels: upgrade-running: \"\" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: \"\" backup: false clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - 
lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status:", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: \"False\" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: \"True\" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: 
false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - 
cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}", "oc apply -f <name>.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - 
name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.14.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.14 desiredUpdate: version: 4.14.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.14.4 remediationAction: inform severity: low remediationAction: inform", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5", "oc create -f cgu-1.yaml", "oc get cgu --all-namespaces", "NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Not enabled\", 1 \"reason\": \"NotEnabled\", \"status\": \"False\", \"type\": \"Progressing\" } ], 
\"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }", "oc get policies -A", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"All selected clusters are valid\", \"reason\": \"ClusterSelectionCompleted\", \"status\": \"True\", \"type\": \"ClustersSelected\", \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"Completed validation\", \"reason\": \"ValidationCompleted\", \"status\": \"True\", \"type\": \"Validated\", \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", 
\"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"remediationPlanForBatch\": { \"spoke1\": 0, \"spoke2\": 1 }, \"startedAt\": \"2022-02-25T15:54:16Z\" } }", "export KUBECONFIG=<cluster_kubeconfig_absolute_path>", "oc get subs -A | grep -i <subscription_name>", "NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.14.5 True True 43s Working towards 4.4.14.7: 71 of 735 done (9% complete)", "oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"", "oc get installplan -n <subscription_namespace>", "NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1", "oc get csv -n <operator_namespace>", "NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded", "nodes: - hostName: \"node-1.example.com\" role: \"master\" rootDeviceHints: hctl: \"0:2:0:0\" deviceName: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 #Disk /dev/disk/by-id/scsi-3600508b400105e210000900000490000: #893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - mount_point: /var/recovery size: 51200 start: 800000", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"backup\": { \"clusters\": [ \"cnfdb2\", \"cnfdb1\" ], \"status\": { \"cnfdb1\": \"Succeeded\", 
\"cnfdb2\": \"Failed\" 1 } }, \"computedMaxConcurrency\": 1, \"conditions\": [ { \"lastTransitionTime\": \"2022-04-05T10:37:19Z\", \"message\": \"Backup failed for 1 cluster\", 2 \"reason\": \"PartiallyDone\", 3 \"status\": \"True\", 4 \"type\": \"Succeeded\" } ], \"precaching\": { \"spec\": {} }, \"status\": {}", "oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno", "ostree admin status", "ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9", "ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca", "rpm-ostree rollback -r", "/var/recovery/upgrade-recovery.sh", "systemctl reboot", "/var/recovery/upgrade-recovery.sh --resume", "/var/recovery/upgrade-recovery.sh --restart", "oc get clusterversion,nodes,clusteroperator", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.14.23 True False 86d Cluster version is 4.4.14.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.14.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.14.23 True False False 86d ...........", "oc adm release info <ocp-version>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"InProgress\", \"status\": \"False\", \"type\": \"PrecachingSucceeded\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 1 \"cnfdb2\" ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\" \"cnfdb2\": \"Succeeded\"} } }", "oc get jobs,pods -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": 
\"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingSucceeded\" 1 }", "oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>", "oc apply -f <ClusterGroupUpgradeCR_YAML>", "oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'", "[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h", "oc get pod -n openshift-operators", "NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2", "oc get managedcluster --selector=upgrade=true 1", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true", "oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'", "[\"spoke1\", \"spoke3\"]", "oc get managedcluster --selector=upgrade=true", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "oc get jobs,pods -n openshift-talo-pre-cache", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'", "{\"maxConcurrency\":2, \"timeout\":240}", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'", "2", "oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'", "{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"Missing 
managed policies:[policyList]\", \"reason\":\"NotAllManagedPoliciesExist\", \"status\":\"False\", \"type\":\"Validated\"}", "oc get cgu lab-upgrade -oyaml", "status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default", "oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'", "[[\"spoke2\", \"spoke3\"]]", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get pods -n openshift-talo-pre-cache", "oc logs -n openshift-talo-pre-cache <pod name>", "oc describe pod -n openshift-talo-pre-cache <pod name>", "oc describe job -n openshift-talo-pre-cache pre-cache", "imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "OCP_RELEASE_NUMBER=<release_version>", "ARCHITECTURE=<cluster_architecture> 1", "DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"", "DIGEST_ALGO=\"USD{DIGEST%%:*}\"", "DIGEST_ENCODED=\"USD{DIGEST#*:}\"", "SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)", "cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF", "curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.14 -o ~/upgrade-graph_stable-4.14", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: \"platform-upgrade-prep\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: \"platform-upgrade-prep\" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: \"platform-upgrade\" metadata: name: version spec: channel: \"stable-4.14\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.14 desiredUpdate: version: 4.14.4 status: history: - version: 4.14.4 state: \"Completed\"", "oc get policies -A | grep platform-upgrade", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-upgrade.yml", "oc 
--namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.14 1 updateStrategy: 2 registryPoll: interval: 1h status: connectionState: lastObservedState: READY 3", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"fec-catsrc-policy\" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: \"subscriptions-fec-policy\" spec: channel: \"stable\" source: certified-operators", "oc get policies -A | grep -E \"catsrc-policy|subscription\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1", "oc apply -f cgu-operator-upgrade-prep.yml", "oc get policies -A | grep -E \"catsrc-policy\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-operator-upgrade.yml", "oc get policy common-subscriptions-policy -n <policy_namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'", "oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq", "[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: DefaultCatsrc.yaml remediationAction: inform policyName: 
\"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace spec: source: redhat-operators-disconnected-v2 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true", "oc apply -f cgu-platform-operator-upgrade-prep.yml", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-operator-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get jobs,pods -n openshift-talm-pre-cache", "oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: PaoSubscriptionNS.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave", "oc get policy -n ztp-common common-subscriptions-policy", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: exampleconfig-ns spec: overrides: 1 platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable spaceRequired: 30 Gi 2 excludePrecachePatterns: 3 - aws - vsphere additionalImages: 4 - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu spec: preCaching: true 1 preCachingConfigRef: name: exampleconfig 2 namespace: exampleconfig-ns 3", "apiVersion: 
ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: default 1 spec: [...] spaceRequired: 30Gi 2 additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu namespace: default spec: clusters: - sno1 - sno2 preCaching: true preCachingConfigRef: - name: exampleconfig namespace: default managedPolicies: - du-upgrade-platform-upgrade - du-upgrade-operator-catsrc-policy - common-subscriptions-policy remediationStrategy: timeout: 240", "oc apply -f cgu.yaml", "oc get cgu <cgu_name> -n <cgu_namespace> -oyaml", "precaching: spec: platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: - aws - vsphere additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 spaceRequired: \"30\" status: sno1: Starting sno2: Starting", "- lastTransitionTime: \"2023-01-01T00:00:01Z\" message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClusterSelected - lastTransitionTime: \"2023-01-01T00:00:02Z\" message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - lastTransitionTime: \"2023-01-01T00:00:03Z\" message: Precaching spec is valid and consistent reason: PrecacheSpecIsWellFormed status: \"True\" type: PrecacheSpecValid - lastTransitionTime: \"2023-01-01T00:00:04Z\" message: Precaching in progress for 1 clusters reason: InProgress status: \"False\" type: PrecachingSucceeded", "Type: \"PrecacheSpecValid\" Status: False, Reason: \"PrecacheSpecIncomplete\" Message: \"Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io \"<pre-caching_cr_name>\" not found\"", "oc get jobs -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE pre-cache 0/1 1s 1s", "oc describe pod pre-cache -n openshift-talo-pre-cache", "Type Reason Age From Message Normal SuccesfulCreate 19s job-controller Created pod: pre-cache-abcd1", "oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache", "oc describe pod pre-cache -n openshift-talo-pre-cache", "Type Reason Age From Message Normal SuccesfulCreate 5m19s job-controller Created pod: pre-cache-abcd1 Normal Completed 19s job-controller Job completed", "oc debug node/cnfdf00.example.lab", "chroot /host/", "sudo podman images | grep <operator_name>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: 
b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240", "oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq", "{\"daemonNodeSelector\":{\"node-role.kubernetes.io/master\":\"\"}} 1", "oc get sriovoperatorconfig/default -n openshift-sriov-network-operator -ojsonpath='{.spec}' | jq", "{\"configDaemonNodeSelector\":{\"node-role.kubernetes.io/worker\":\"\"},\"disableDrain\":false,\"enableInjector\":true,\"enableOperatorWebhook\":true} 1", "spec: - fileName: PtpOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" - fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-sno-workers\" namespace: \"example-sno\" spec: bindingRules: sites: \"example-sno\" 1 mcp: \"worker\" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: \"config-policy\" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: \"4-47\" reserved: \"0-3\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker", "cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF", "nodes: - hostName: \"example-node2.example.com\" role: 
\"worker\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node2-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"AA:BB:CC:DD:EE:11\" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: v1 data: password: \"password\" username: \"username\" kind: Secret metadata: name: \"example-node2-bmh-secret\" namespace: example-sno type: Opaque", "oc get ppimg -n example-sno", "NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated", "oc get bmh -n example-sno", "NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1", "oc get agent -n example-sno --watch", "NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done", "oc get managedclusterinfo/example-sno -n example-sno -o jsonpath='{range .status.nodeList[*]}{.name}{\"\\t\"}{.conditions}{\"\\t\"}{.labels}{\"\\n\"}{end}'", "example-sno [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/master\":\"\",\"node-role.kubernetes.io/worker\":\"\"} example-node2 [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/worker\":\"\"}", "podman pull quay.io/openshift-kni/telco-ran-tools:latest", "podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v", "factory-precaching-cli version 20221018.120852+main.feecf17", "curl --globoff -H \"Content-Type: application/json\" -H \"Accept: application/json\" -k -X GET --user USD{username_password} https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Image\": \"http://[USDHTTPd_IP]/RHCOS-live.iso\"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Boot\":{ \"BootSourceOverrideEnabled\": \"Once\", \"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk", "wipefs -a /dev/nvme0n1", "/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 
41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa", "podman run -v /dev:/dev --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli partition \\ 1 -d /dev/nvme0n1 \\ 2 -s 250 3", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part", "gdisk -l /dev/nvme0n1", "GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data", "lsblk -f /dev/nvme0n1", "NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071", "mount /dev/nvme0n1p1 /mnt/", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1", "taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help", "oc get csv -A | grep -i advanced-cluster-management", "open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded", "oc get csv -A | grep -i multicluster-engine", "multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded", "mkdir /root/.docker", "cp config.json /root/.docker/config.json 1", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: 
ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f Summary: Release: 4.14.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83", "ls -l /mnt 1", "-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 
1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s 7", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 Summary: Release: 4.14.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --generate-imageset 8", "Generated /mnt/imageset.yaml", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.14 minVersion: 4.14.0 1 maxVersion: 4.14.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.14' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.14 packages: - name: sriov-fec 11 channels: - name: 'stable'", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] 
operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.14 packages: - name: sriov-fec channels: - name: 'stable'", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --skip-imageset 8", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.14.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --img quay.io/custom/repository --du-profile -s --skip-imageset", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-5g-lab\" namespace: \"example-5g-lab\" spec: baseDomain: \"example.domain.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"img4.9.10-x86-64-appsub\" 1 sshPublicKey: \"ssh-rsa ...\" clusters: - clusterName: \"sno-worker-0\" clusterImageSetNameRef: \"eko4-img4.11.5-x86-64-appsub\" 2 clusterLabels: group-du-sno: \"\" common-411: true sites : \"example-5g-lab\" vendor: \"OpenShift\" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: \"OVNKubernetes\" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-images.service\\nBindsTo=precache-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-images.service\" }, { \"name\": \"precache-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached images in discovery stage\\nAfter=var-mnt.mount\\nBefore=agent.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ai.sh\\n#TimeoutStopSec=30\\n\\n[Install]\\nWantedBy=multi-user.target default.target\\nWantedBy=agent.service\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ai.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": 
\"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200\" } }, { \"overwrite\": true, \"path\": \"/usr/local/bin/agent-fix-bz1964591\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true\" } } ] } }' nodes: - hostName: \"snonode.sno-worker-0.example.domain.redhat.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"worker0-bmh-secret\" bootMACAddress: \"e4:43:4b:bd:90:46\" bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '[\"--save-partlabel\", \"data\"]' ignitionConfigOverride: | { 
\"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-ocp-images.service\\nBindsTo=precache-ocp-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-ocp-images.service\" }, { \"name\": \"precache-ocp-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached OCP images into containers storage\\nAfter=var-mnt.mount\\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ocp.sh\\nTimeoutStopSec=60\\n\\n[Install]\\nWantedBy=multi-user.target\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ocp.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200\" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: \"AA:BB:CC:11:22:33\" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"ens1f0\" macAddress: \"AA:BB:CC:11:22:33\"", "OPTIONS: -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL --save-partlabel <lx> Save partitions with this label glob --save-partindex <id> Save partitions with this number or range --insecure-ignition Allow Ignition URL without HTTPS or hash", "Generating list of pre-cached artifacts error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror 
--ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying next host error=failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run \"oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME\" for more information. error: error rendering new refs: render reference \"eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11\": error resolving name : failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.14.0 --acm-version 2.5.4 --mce-version 2.0.4 -f /mnt \\--img quay.io/custom/repository --du-profile -s --skip-imageset" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/scalability_and_performance/index
Chapter 1. Key features
Chapter 1. Key features
An open standard protocol - AMQP 1.0
Industry-standard APIs - JMS 1.1 and 2.0
New event-driven APIs - Fast, efficient messaging that integrates everywhere
Broad language support - C++, Java, JavaScript, Python, Ruby, and .NET
Wide availability - Linux, Windows, and JVM-based environments
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/amq_clients_overview/key_features
Chapter 3. Creating applications
Chapter 3. Creating applications

3.1. Creating applications by using the Developer perspective

The Developer perspective in the web console provides the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform:

Getting started resources: Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu.
Creating applications using samples: Use existing code samples to get started with creating applications on the OpenShift Container Platform.
Build with guided documentation: Follow the guided documentation to build applications and familiarize yourself with key concepts and terminology.
Explore new developer features: Explore the new features and resources within the Developer perspective.

Developer catalog: Explore the Developer Catalog to select the required applications, services, or source-to-image builders, and then add them to your project.
All Services: Browse the catalog to discover services across OpenShift Container Platform.
Database: Select the required database service and add it to your application.
Operator Backed: Select and deploy the required Operator-managed service.
Helm chart: Select the required Helm chart to simplify deployment of applications and services.
Devfile: Select a devfile from the Devfile registry to declaratively define a development environment.
Event Source: Select an event source to register interest in a class of events from a particular system.
Note: The Managed services option is also available if the RHOAS Operator is installed.

Git repository: Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git, From Devfile, or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform.
Container images: Use existing images from an image stream or registry to deploy them on OpenShift Container Platform.
Pipelines: Use Tekton pipelines to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform.
Serverless: Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform.
Channel: Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations.
Samples: Explore the available sample applications to create, build, and deploy an application quickly.
Quick Starts: Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks.
From Local Machine: Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily.
Import YAML: Upload a YAML file to create and define resources for building and deploying applications.
Upload JAR file: Upload a JAR file to build and deploy Java applications.
Share my Project: Use this option to add users to or remove users from a project and provide accessibility options to them.
Helm Chart repositories: Use this option to add Helm Chart repositories in a namespace.
Re-ordering of resources: Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides.
Note that certain options, such as Pipelines, Event Source, and Import Virtual Machines, are displayed only when the OpenShift Pipelines Operator, OpenShift Serverless Operator, and OpenShift Virtualization Operator are installed, respectively.

3.1.1. Prerequisites

To create applications by using the Developer perspective, ensure that:
You have logged in to the web console.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

To create serverless applications, in addition to the preceding prerequisites, ensure that:
You have installed the OpenShift Serverless Operator.
You have created a KnativeServing resource in the knative-serving namespace.

3.1.2. Creating sample applications

You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly.

Prerequisites
You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure
In the +Add view, click the Samples tile to see the Samples page.
On the Samples page, select one of the available sample applications to see the Create Sample Application form.
In the Create Sample Application form:
In the Name field, the deployment name is displayed by default. You can modify this name as required.
In the Builder Image Version, a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list.
A sample Git repository URL is added by default.
Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application.

3.1.3. Creating applications by using Quick Starts

The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks.

Prerequisites
You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure
In the +Add view, click the Getting Started resources → Build with guided documentation → View all quick starts link to view the Quick Starts page.
In the Quick Starts page, click the tile for the quick start that you want to use.
Click Start to begin the quick start.
Perform the steps that are displayed.

3.1.4. Importing a codebase from Git to create an application

You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application.

Procedure
In the +Add view, click From Git in the Git Repository tile to see the Import from Git form.
In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated.
Optional: You can click Show Advanced Git Options to add details such as:
Git Reference to point to code in a specific branch, tag, or commit to be used to build the application.
Context Dir to specify the subdirectory for the application source code you want to use to build the application.
Source Secret to create a Secret Name with credentials for pulling your source code from a private repository.
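Such a source clone secret can also be created ahead of time from the CLI. A minimal sketch, assuming a hypothetical secret name and basic-auth credentials for the private repository:

$ oc create secret generic yoursecret \
    --from-literal=username=<git_username> \
    --from-literal=password=<git_token> \
    --type=kubernetes.io/basic-auth \
    -n <your_project>

Selecting this secret in the form injects it into the generated build configuration so that the build can clone the private repository.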
Optional: You can import a Devfile, a Dockerfile, a Builder Image, or a Serverless Function through your Git repository to further customize your deployment.
If your Git repository contains a Devfile, a Dockerfile, a Builder Image, or a func.yaml, it is automatically detected and populated on the respective path fields.
If a Devfile, a Dockerfile, or a Builder Image is detected in the same repository, the Devfile is selected by default.
If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function. Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL.
To edit the file import type and select a different strategy, click the Edit import strategy option.
If multiple Devfiles, Dockerfiles, or Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory.
After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected.
Optional: Use the Builder Image Version drop-down to specify a version.
Optional: Use the Edit import strategy to select a different strategy.
Optional: For the Node.js builder image, use the Run command field to override the command to run the application.

In the General section:
In the Application field, enter a unique name for the application grouping, for example, myapp. Ensure that the application name is unique in a namespace.
The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned.
Note: The resource name must be unique in a namespace. Modify the resource name if you get an error.

In the Resources section, select:
Deployment, to create an application in plain Kubernetes style.
Deployment Config, to create an OpenShift Container Platform style application.
Serverless Deployment, to create a Knative service.
Note: To set the default resource preference for importing an application, go to the User Preferences → Applications → Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation.

In the Pipelines section, select Add Pipeline, and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application.
Note: The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criteria are fulfilled:
The Pipelines Operator is installed.
pipelines-as-code is enabled.
A .tekton directory is detected in the Git repository.

Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option:
Go to Settings → Webhooks and click Add webhook.
Set the Payload URL to the Pipelines as Code controller public URL.
Select the content type as application/json.
Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret (see the example after these steps).
Click Let me select individual events and select these events: Commit comments, Issue comments, Pull request, and Pushes.
Click Add webhook.
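For the webhook secret, one common approach is to generate a random value with the openssl CLI, for example:

$ openssl rand -hex 20

Note the output somewhere safe; the same value is typically needed on the receiving side of the webhook as well.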
Optional: In the Advanced Options section, the Target port and the Create a route to the application options are selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose.

Optional: You can use the following advanced options to further customize your application:

Routing
By clicking the Routing link, you can perform the following actions:
Customize the hostname for the route.
Specify the path the router watches.
Select the target port for the traffic from the drop-down list.
Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists.
Note: For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used.

Domain mapping
If you are creating a Serverless Deployment, you can add a custom domain mapping to the Knative service during creation. In the Advanced options section, click Show advanced Routing options. If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu. If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com, the Create option is Create "example.com".

Health Checks
Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required (see the probe sketch later in this section).
To customize the health probes:
Click Add Readiness Probe, if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe.
Click Add Liveness Probe, if required, modify the parameters to check if a container is still running, and select the check mark to add the probe.
Click Add Startup Probe, if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe.
For each of the probes, you can specify the request type - HTTP GET, Container Command, or TCP Socket - from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, the number of seconds before performing the first probe after the container starts, the frequency of the probe, and the timeout value.

Build Configuration and Deployment
Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource.
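The health check form fields map onto standard Kubernetes probe definitions in the pod template of the generated workload. A minimal sketch of the containers section with a customized readiness and liveness probe; the path, port, and threshold values are illustrative only:

spec:
  containers:
    - name: myapp
      image: <your_image>
      readinessProbe:            # "is the container ready to handle requests?"
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5   # wait before the first probe
        periodSeconds: 10        # probe frequency
      livenessProbe:             # "is the container still running?"
        tcpSocket:
          port: 8080
        failureThreshold: 3
        timeoutSeconds: 1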
Scaling
Click the Scaling link to define the number of pods or instances of the application you want to deploy initially.
If you are creating a serverless deployment, you can also configure the following settings (see the sketch at the end of this section):
Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting.
Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting.
Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time.
Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time.
Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic.
Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s. This is also known as the stable window.

Resource Limit
Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running.

Labels
Click the Labels link to add custom labels to your application.

Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view.
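For serverless deployments, the scaling settings above are typically expressed as Knative autoscaling annotations on the revision template of the generated service. A minimal sketch with illustrative values; the name and image are placeholders:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"   # Min Pods (minScale)
        autoscaling.knative.dev/max-scale: "5"   # Max Pods (maxScale)
        autoscaling.knative.dev/target: "50"     # Concurrency target
    spec:
      containers:
        - image: <your_image>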
3.1.5. Deploying a Java application by uploading a JAR file

You can use the web console Developer perspective to upload a JAR file by using the following options:
Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application.
Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application.
Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application.

Prerequisites
The Cluster Samples Operator must be installed by a cluster administrator.
You have access to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure
In the Topology view, right-click anywhere to view the Add to Project menu.
Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view.
In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file onto the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form.
The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list.
Optional: In the Application Name field, enter a unique name for your application to use for resource labeling.
In the Name field, enter a unique component name for the associated resources.
Optional: Using the Advanced options → Resource type drop-down list, select a different resource type from the list of default resource types.
In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application.
Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs.
Note: If you attempt to close the browser tab while the build is running, a web alert is displayed.
After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view.

3.1.6. Using the Devfile registry to access devfiles

You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry. A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch (a minimal sketch follows this procedure). Using the Devfile registry, you can use a preconfigured devfile to create an application.

Procedure
Navigate to Developer Perspective → +Add → Developer Catalog → All Services. A list of all the available services in the Developer Catalog is displayed.
Under Type, click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using its name, tag, or description.
Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile.
Click Create to create an application and view the application in the Topology view.
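To give a sense of what such a file contains, here is a minimal devfile sketch; the schema version, image, and names are illustrative and not taken from the registry:

schemaVersion: 2.2.0
metadata:
  name: my-node-app
  language: JavaScript
components:
  - name: runtime
    container:
      image: registry.access.redhat.com/ubi8/nodejs-16:latest
      memoryLimit: 512Mi
      endpoints:
        - name: http-3000
          targetPort: 3000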
For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.2. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. Additional resources See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform. 3.2.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.13 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators  Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.3. Creating applications by using the CLI You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI.
The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.3.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.3.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.3.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.3.1.3. Build strategy detection OpenShift Container Platform automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.3.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. 
Languages detected by new-app Language Files dotnet project.json , *.csproj jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby image stream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone the repository to determine what type of artifact it is, so this will fail if Git is not available. The -i <image> --code <repository> invocation requires that new-app clone the repository to determine whether the image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.3.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server. The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes. 3.3.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.3.2.2. Image in a private registry To create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.3.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.3.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library.
The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.3.3.1. Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.3.4. Modifying application creation The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.3.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. 
The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.3.4.2. Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.3.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.3.4.4. Viewing the output without creation To see a dry run of the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.3.4.5. Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.3.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.3.4.7. Creating multiple objects The new-app command allows you to create multiple applications by specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository.
If this is not the intent, specify the required builder image for the source using the ~ separator. 3.3.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.3.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php
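These options can be combined in a single invocation. The following is a hedged end-to-end sketch: the repository is the ruby-hello-world example used throughout this section, while the name, label, environment variable, and project values are purely illustrative: USD oc new-app https://github.com/openshift/ruby-hello-world --name=hello -l app=hello -e RACK_ENV=production -n myproject This single command creates the build configuration, image streams, deployment, and service under the name hello , labels all of the generated objects with app=hello , and passes RACK_ENV to the application container in the myproject project.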
[ "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc new-app /<path to source code>", "oc new-app https://github.com/sclorg/cakephp-ex", "oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret", "oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app", "oc new-app https://github.com/openshift/ruby-hello-world.git#beta4", "oc new-app /home/user/code/myapp --strategy=docker", "oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git", "oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app", "oc new-app mysql", "oc new-app myregistry:5000/example/myimage", "oc new-app my-stream:v1", "oc create -f examples/sample-app/application-template-stibuild.json", "oc new-app ruby-helloworld-sample", "oc new-app -f examples/sample-app/application-template-stibuild.json", "oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword", "ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword", "oc new-app ruby-helloworld-sample --param-file=helloworld.params", "oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password", "POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password", "oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env", "cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-", "oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem", "HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem", "oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env", "cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-", "oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world", "oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml", "vi myapp.yaml", "oc create -f myapp.yaml", "oc new-app https://github.com/openshift/ruby-hello-world --name=myapp", "oc new-app https://github.com/openshift/ruby-hello-world -n myproject", "oc new-app https://github.com/openshift/ruby-hello-world mysql", "oc new-app ruby+mysql", "oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql", "oc new-app --search php" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/building_applications/creating-applications
4.5. Configuration File Devices
4.5. Configuration File Devices Table 4.3, "Device Attributes" shows the attributes that you can set for each individual storage device in the devices section of the multipath.conf configuration file. These attributes are used by DM-Multipath unless they are overwritten by the attributes specified in the multipaths section of the multipath.conf file for paths that contain the device. These attributes override the attributes set in the defaults section of the multipath.conf file. Many devices that support multipathing are included by default in a multipath configuration. The values for the devices that are supported by default are listed in the multipath.conf.defaults file. You probably will not need to modify the values for these devices, but if you do you can overwrite the default values by including an entry in the configuration file for the device that overrides those values. You can copy the device configuration defaults from the multipath.conf.defaults file for the device and override the values that you want to change. To add a device to this section of the configuration file that is not configured automatically by default, you need to set the vendor and product parameters. You can find these values by looking at /sys/block/ device_name /device/vendor and /sys/block/ device_name /device/model where device_name is the device to be multipathed, as in the following example: The additional parameters to specify depend on your specific device. If the device is active/active, you will usually not need to set additional parameters. You may want to set path_grouping_policy to multibus . Other parameters you may need to set are no_path_retry and rr_min_io , as described in Table 4.3, "Device Attributes" . If the device is active/passive, but it automatically switches paths with I/O to the passive path, you need to change the checker function to one that does not send I/O to the path to test if it is working (otherwise, your device will keep failing over). This almost always means that you set the path_checker to tur ; this works for all SCSI devices that support the Test Unit Ready command, which most do. If the device needs a special command to switch paths, then configuring this device for multipath requires a hardware handler kernel module. The current hardware handlers are emc and rdac . If these are not sufficient for your device, you may not be able to configure the device for multipath. Table 4.3. Device Attributes Attribute Description vendor Specifies the vendor name of the storage device to which the device attributes apply, for example COMPAQ . product Specifies the product name of the storage device to which the device attributes apply, for example HSV110 (C)COMPAQ . bl_product Specifies a regular expression used to blacklist devices by vendor/product. Note that for a device to get blacklisted, the vendor , product , and bl_product strings must all match. path_grouping_policy Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover = 1 path per priority group multibus = all valid paths in 1 priority group group_by_serial = 1 priority group per detected serial number group_by_prio = 1 priority group per path priority value group_by_node_name = 1 priority group per target node name getuid_callout Specifies the default program and arguments to call out to obtain a unique path identifier. An absolute path is required. prio_callout Specifies the default program and arguments to call out to obtain a path weight.
Weights are summed for each path group to determine the path group to use in case of failure. "none" is a valid value. path_checker Specifies the default method used to determine the state of the paths. Possible values include readsector0 , tur , emc_clariion , hp_sw , and directio . path_selector Specifies the default algorithm to use in determining what path to use for the I/O operation. failback Specifies path group failback. A value of 0 or immediate specifies that as soon as there is a path group with a higher priority than the current path group the system switches to that path group. A numeric value greater than zero specifies deferred failback, expressed in seconds. A value of manual specifies that failback can happen only with operator intervention. features The extra features of multipath devices. The only existing feature is queue_if_no_path , which is the same as setting no_path_retry to queue . hardware_handler Specifies a module that will be used to perform hardware specific actions when switching path groups or handling I/O errors. Possible values include 0 , 1 emc , and 1 rdac . The default value is 0 . rr_min_io (RHEL 4.8 and later) Specifies the number of I/O requests to route to a path before switching to the next path in the current path group. The default value is 1000. rr_weight If set to priorities , then instead of sending rr_min_io requests to a path before calling selector to choose the path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio_callout program. Currently, there are priority callouts only for devices that use the group_by_prio path grouping policy, which means that all the paths in a path group will always have the same priority. If set to uniform , all path weights are equal. The default value is uniform . no_path_retry A numeric value for this attribute specifies the number of times the system should attempt to use a failed path before disabling queueing. A value of fail indicates immediate failure, without queuing. A value of queue indicates that queuing should not stop until the path is fixed. The default value is (null). flush_on_last_del (RHEL 4.7 and later) If set to yes , the multipathd daemon will disable queueing when the last path to a device has been deleted. The default value is no . The following examples show device entries in the multipath configuration file.
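The first is a hedged sketch for a hypothetical active/active array that combines the path_grouping_policy , no_path_retry , and rr_min_io attributes discussed above; the vendor and product strings are invented and must be replaced with the values read from /sys/block for your device:
devices {
    device {
        vendor "EXAMPLE"
        product "ARRAY100"
        path_grouping_policy multibus
        no_path_retry 12
        rr_min_io 100
    }
}
The second is the commented device entry for the COMPAQ MSA1000 taken from the default configuration.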
[ "cat /sys/block/sda/device/vendor WINSYS cat /sys/block/sda/device/model SF2372", "# } # device { # vendor \"COMPAQ \" # product \"MSA1000 \" # path_grouping_policy multibus # path_checker tur # rr_weight priorities # } #}" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/config_file_devices
A.16. SystemTap
A.16. SystemTap SystemTap is extensively documented in its own guides: the Red Hat Enterprise Linux 7 versions of the SystemTap Beginners Guide and the SystemTap Tapset Reference .
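For orientation before turning to those guides, the canonical first SystemTap one-liner traces virtual file system reads and exits after the first event; running it requires the kernel debuginfo packages described in the SystemTap Beginners Guide : stap -v -e 'probe vfs.read { printf("read performed\n"); exit() }'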
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-systemtap
Chapter 11. Using Git
Chapter 11. Using Git Effective revision control is essential to all multi-developer projects. It allows all developers in a team to create, review, revise, and document code in a systematic and orderly manner. Red Hat Enterprise Linux 7 is distributed with an open-source revision control system, Git . A detailed description of Git and its features is beyond the scope of this book. For more information about this revision control system, see the resources listed below. Installed Documentation Linux manual pages for Git and tutorials: Note that many Git commands have their own manual pages. Git User's Manual - HTML documentation for Git is located at /usr/share/doc/git-1.8.3/user-manual.html . Online Documentation The online version of the Pro Git book provides a detailed description of Git , its concepts and its usage - Pro Git Book The online version of the Linux manual pages for Git - Pro Git Reference Sheet
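As a minimal orientation before consulting those resources, a typical first Git session creates a repository, stages all files, and records a commit:
git init myproject
cd myproject
git add .
git commit -m "Initial commit"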
[ "man git man gittutorial man gittutorial-2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/using-git
function::get_sa_handler
function::get_sa_handler Name function::get_sa_handler - Returns the numeric value of sa_handler Synopsis Arguments act address of the sigaction to query.
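A hedged usage sketch follows. It assumes the signal.do_action probe point from the signal tapset and its sigact_addr context variable, which holds the address of the new sigaction structure; check the tapset reference for your release before relying on either:
probe signal.do_action {
    # sigact_addr and sig_name are assumed context variables of signal.do_action
    if (sigact_addr)
        printf("%s: sa_handler = 0x%x\n", sig_name, get_sa_handler(sigact_addr))
}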
[ "get_sa_handler:long(act:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-get-sa-handler
Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57
Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 The steps to upgrade to the latest Red Hat JBoss Core Services (JBCS) release differ depending on whether you previously installed JBCS from RPM packages or from an archive file. Upgrading JBCS when installed from RPM packages If you installed an earlier release of the JBCS Apache HTTP Server from RPM packages on RHEL 7 or RHEL 8 by using the yum groupinstall command, you can upgrade to the latest release. You can use the yum groupupdate command to upgrade to the 2.4.57 release on RHEL 7 or RHEL 8. Note JBCS does not provide an RPM distribution of the Apache HTTP Server on RHEL 9. Upgrading JBCS when installed from an archive file If you installed an earlier release of the JBCS Apache HTTP Server from an archive file, you must perform the following steps to upgrade to the Apache HTTP Server 2.4.57: Install the Apache HTTP Server 2.4.57. Set up the Apache HTTP Server 2.4.57. Remove the earlier version of Apache HTTP Server. The following procedure describes the recommended steps for upgrading a JBCS Apache HTTP Server 2.4.51 release that you installed from archive files to the latest 2.4.57 release. Prerequisites If you are using Red Hat Enterprise Linux, you have root user access. If you are using Windows Server, you have administrative access. The Red Hat JBoss Core Services Apache HTTP Server 2.4.51 or earlier was previously installed in your system from an archive file. Procedure Shut down any running instances of Red Hat JBoss Core Services Apache HTTP Server 2.4.51. Back up the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 installation and configuration files. Install the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 using the .zip installation method for the current system (see Additional Resources below). Migrate your configuration from the Red Hat JBoss Core Services Apache HTTP Server version 2.4.51 to version 2.4.57. Note The Apache HTTP Server configuration files might have changed since the Apache HTTP Server 2.4.51 release. Consider updating the 2.4.57 version configuration files rather than overwrite them with the configuration files from a different version, such as the Apache HTTP Server 2.4.51. Remove the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 root directory. Additional Resources Installing the JBCS Apache HTTP Server on RHEL from archive files Installing the JBCS Apache HTTP Server on RHEL from RPM packages Installing the JBCS Apache HTTP Server on Windows Server
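For example, if you previously installed the Apache HTTP Server package group with yum groupinstall , the RPM upgrade is typically a single command run as the root user; the group name below assumes the jbcs-httpd24-httpd naming used by earlier JBCS releases: # yum groupupdate jbcs-httpd24-httpd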
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/upgrading-to-the-jbcs-http-2.4.57-release-notes
Chapter 9. Backing up and restoring IdM servers using Ansible playbooks
Chapter 9. Backing up and restoring IdM servers using Ansible playbooks Using the ipabackup Ansible role, you can automate backing up an IdM server, transferring backup files between servers and your Ansible controller, and restoring an IdM server from a backup. 9.1. Using Ansible to create a backup of an IdM server You can use the ipabackup role in an Ansible playbook to create a backup of an IdM server and store it on the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the backup-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the backup-my-server.yml Ansible playbook file for editing. Adapt the file by setting the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group: Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Verification Log into the IdM server that you have backed up. Verify that the backup is in the /var/lib/ipa/backup directory. Additional resources For more sample Ansible playbooks that use the ipabackup role, see: The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 9.2. Using Ansible to create a backup of an IdM server on your Ansible controller You can use the ipabackup role in an Ansible playbook to create a backup of an IdM server and automatically transfer it on your Ansible controller. Your backup file name begins with the host name of the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. Navigate to the ~/MyPlaybooks/ directory: Make a copy of the backup-server-to-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the backup-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Optional: To maintain a copy of the backup on the IdM server, uncomment the following line: By default, backups are stored in the present working directory of the Ansible controller. 
To specify the backup directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Verification Verify that the backup is in the /home/user/ipabackups directory of your Ansible controller: Additional resources For more sample Ansible playbooks that use the ipabackup role, see: The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 9.3. Using Ansible to copy a backup of an IdM server to your Ansible controller You can use an Ansible playbook to copy a backup of an IdM server from the IdM server to your Ansible controller. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-my-backup-from-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your IdM server to copy to your Ansible controller. By default, backups are stored in the present working directory of the Ansible controller. To specify the directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To copy all IdM backups to your controller, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the copy-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Verification Verify your backup is in the /home/user/ipabackups directory on your Ansible controller: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 9.4. Using Ansible to copy a backup of an IdM server from your Ansible controller to the IdM server You can use an Ansible playbook to copy a backup of an IdM server from your Ansible controller to the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . 
The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-my-backup-from-my-controller-to-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your Ansible controller to copy to the IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 9.5. Using Ansible to remove a backup from an IdM server You can use an Ansible playbook to remove a backup from an IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the remove-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the remove-backup-from-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to remove from your IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To remove all IdM backups from the IdM server, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the remove-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 9.6. Using Ansible to restore an IdM server from a backup stored on the server You can use an Ansible playbook to restore an IdM server from a backup stored on that host. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the LDAP Directory Manager password. 
Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the restore-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the restore-my-server.yml Ansible playbook file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to restore. Set the ipabackup_password variable to the LDAP Directory Manager password. Save the file. Run the Ansible playbook specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 9.7. Using Ansible to restore an IdM server from a backup stored on your Ansible controller You can use an Ansible playbook to restore an IdM server from a backup stored on your Ansible controller. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the LDAP Directory Manager password. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the restore-server-from-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the restore-my-server-from-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to restore. Set the ipabackup_password variable to the LDAP Directory Manager password. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory.
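For reference, the inventory file that all of these examples assume is minimal: an [ipaserver] host group that lists the fully-qualified domain name of the IdM server. The host name shown here is illustrative:
[ipaserver]
server.idm.example.com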
[ "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/backup-server.yml backup-my-server.yml", "--- - name: Playbook to backup IPA server hosts: ipaserver become: true roles: - role: ipabackup state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server.yml", "ls /var/lib/ipa/backup/ ipa-full-2021-04-30-13-12-00", "mkdir ~/ipabackups", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/backup-server-to-controller.yml backup-my-server-to-my-controller.yml", "ipabackup_keep_on_server: true", "--- - name: Playbook to backup IPA server to controller hosts: ipaserver become: true vars: ipabackup_to_controller: true # ipabackup_keep_on_server: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server-to-my-controller.yml", "[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00", "mkdir ~/ipabackups", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-server.yml copy-backup-from-my-server-to-my-controller.yml", "--- - name: Playbook to copy backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 ipabackup_to_controller: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-server-to-my-controller.yml", "vars: ipabackup_name: all ipabackup_to_controller: true", "[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-controller.yml copy-backup-from-my-controller-to-my-server.yml", "--- - name: Playbook to copy a backup from controller to the IPA server hosts: ipaserver become: true vars: ipabackup_name: server.idm.example.com_ipa-full-2021-04-30-13-12-00 ipabackup_from_controller: true roles: - role: ipabackup state: copied", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-controller-to-my-server.yml", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/remove-backup-from-server.yml remove-backup-from-my-server.yml", "--- - name: Playbook to remove backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 roles: - role: ipabackup state: absent", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory remove-backup-from-my-server.yml", "vars: ipabackup_name: all", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/restore-server.yml restore-my-server.yml", "--- - name: Playbook to restore an IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 ipabackup_password: <your_LDAP_DM_password> roles: - role: ipabackup state: restored", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory restore-my-server.yml", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/restore-server-from-controller.yml restore-my-server-from-my-controller.yml", "--- - name: Playbook to restore IPA server from controller hosts: ipaserver become: true vars: ipabackup_name: server.idm.example.com_ipa-full-2021-04-30-13-12-00 ipabackup_password: <your_LDAP_DM_password> 
ipabackup_from_controller: true roles: - role: ipabackup state: restored", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory restore-my-server-from-my-controller.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/assembly_backing-up-and-restoring-idm-servers-using-ansible-playbooks_planning-identity-management
Chapter 290. SAP Component
Chapter 290. SAP Component The SAP component is a package consisting of a suite of ten different SAP components. There are remote function call (RFC) components that support the sRFC, tRFC, and qRFC protocols; and there are IDoc components that facilitate communication using messages in IDoc format. The component uses the SAP Java Connector (SAP JCo) library to facilitate bidirectional communication with SAP and the SAP IDoc library to facilitate the transmission of documents in the Intermediate Document (IDoc) format. 290.1. Overview Dependencies Maven users need to add the following dependency to their pom.xml file to use this component: <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap</artifactId> <version>x.x.x</version> </dependency> Additional platform restrictions for the SAP component Because the SAP component depends on the third-party JCo 3 and IDoc 3 libraries, it can only be installed on the platforms that these libraries support. For more details about the supported library versions and platform restrictions, see Red Hat JBoss Fuse Supported Configurations . SAP JCo and SAP IDoc libraries A prerequisite for using the SAP component is that the SAP Java Connector (SAP JCo) libraries and the SAP IDoc library are installed into the lib/ directory of the Java runtime. You can download the appropriate set of SAP libraries for your target operating system from the SAP Service Marketplace. Note You must have an SAP Service Marketplace Account to download and use these libraries. The names of the library files vary depending on the target operating system, as shown in Table 290.1, "Required SAP Libraries" . Table 290.1. Required SAP Libraries SAP Component Linux and UNIX Windows SAP JCo 3 sapjco3.jar libsapjco3.so sapjco3.jar sapjco3.dll SAP IDoc sapidoc3.jar sapidoc3.jar For more information, see the SAP Java Connector documentation. 290.2. Installing required SAP Libraries 290.2.1. Deploying in a Fuse OSGi Container You can install the SAP JCo libraries and the SAP IDoc library into the JBoss Fuse OSGi container as follows: Download the SAP JCo libraries and the SAP IDoc library from the SAP Service Marketplace ( http://service.sap.com/public/connectors ), making sure to choose the appropriate version of the libraries for your operating system. Note You must have an SAP Service Marketplace Account to download and use these libraries. Copy the sapjco3.jar , libsapjco3.so (or sapjco3.dll on Windows), and sapidoc3.jar library files into the lib/ directory of your Fuse installation. Open both the configuration properties file, etc/config.properties , and the custom properties file, etc/custom.properties , in a text editor. In the etc/config.properties file, look for the org.osgi.framework.system.packages.extra property and copy the complete property setting (this setting extends over multiple lines, with a backslash character, \ , used to indicate line continuation). Now paste this setting into the etc/custom.properties file. You can now add the extra packages required to support the SAP libraries. In the etc/custom.properties file, add the required packages to the org.osgi.framework.system.packages.extra setting as shown: Don't forget to include a comma and a backslash, , \ , at the end of each line preceding the new entries, so that the list is properly continued. Restart the container for these changes to take effect. Install the camel-sap feature in the container. In the Karaf console, enter the following command: 290.2.2.
Deploying in a JBoss EAP container To deploy the SAP component in a JBoss EAP container, perform the following steps: Download the SAP JCo libraries and the SAP IDoc library from the SAP Service Marketplace ( http://service.sap.com/public/connectors ), making sure to choose the appropriate version of the libraries for your operating system. Note You must have an SAP Service Marketplace Account to download and use these libraries. Copy the JCo library files and the IDoc library file into a subdirectory for your JBoss EAP installation. Important Follow the naming convention The native libraries must be installed in a subdirectory that follows the naming standard, in the form of <osname>-<cpuname> . Information and a complete list of allowed names are available in the JBoss Modules manual . For example, if your host platform is 64-bit Linux ( linux-x86_64 ), install the library files as follows: Example Create a new file called USDJBOSS_HOME/modules/system/layers/fuse/org/wildfly/camel/extras/main/module.xml and add the following content: 290.2.3. Deploying in Spring Boot and OpenShift Container Platform To deploy SAP in your project with Maven using the maven-resources and maven-jar plugins, follow these steps: Download the libraries Add dependencies Place libraries in the project Add configuration for the libraries Deploy to OpenShift 290.2.3.1. Downloading libraries You need three libraries: Common library for all environments: sapidoc3.jar Libraries for your architecture: sapjco3.jar sapjco3.so For more information, see the SAP Java Connector documentation. Download the SAP JCo libraries and the SAP IDoc library from the SAP Service Marketplace ( http://service.sap.com/public/connectors ), making sure to choose the appropriate version of the libraries for your operating system. Note You must have an SAP Service Marketplace Account to download and use these libraries. 290.2.3.2. Adding dependencies Maven users need to add the following dependency to their pom.xml file to use this component: <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency> 290.2.3.3. Placing libraries Copy the SAP library files to the lib directory relative to the pom.xml . When you run Maven, it follows the instructions in the pom.xml and copies the files to the specified locations. Example: AMD64 Warning Do not add the SAP library files to a custom Maven repository The SAP Java Connector performs validation on the names of the JAR files sapjco3.jar and sapidoc3.jar . If you copy the JAR files to your Maven repository, spring-boot-maven-plugin renames them by appending the version number. This causes validation to fail, preventing the application from deploying properly. 290.2.3.4. Configuring plugins Add the Maven configuration to the pom.xml , below the spring-boot-maven-plugin : Add the maven-jar-plugin and set the Class-Path entry to the lib folder location: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestEntries> <Class-Path>lib/USD{os.arch}/sapjco3.jar lib/USD{os.arch}/sapidoc3.jar</Class-Path> </manifestEntries> </archive> </configuration> </plugin> This creates the correct structure for the necessary artifacts, and makes the SAP libraries deploy to the required target directories.
Use the maven-resources-plugin in the pom.xml to copy the library files: <plugin> <artifactId>maven-resources-plugin</artifactId> <executions> <execution> <id>copy-resources01</id> <phase>process-classes</phase> <goals> <goal>copy-resources</goal> </goals> <configuration> <outputDirectory>USD{basedir}/target/lib</outputDirectory> <encoding>UTF-8</encoding> <resources> <resource> <directory>USD{basedir}/lib</directory> <includes> <include>**/**</include> </includes> </resource> </resources> </configuration> </execution> </executions> </plugin> This copies the libraries directory from the relative lib to target/lib when you run the build. 290.2.3.5. Deploying to OpenShift Complete the following steps to deploy to OpenShift and trigger a Maven build. Run oc to create and configure the build: oc new-build --binary=true --image-stream="<current_Fuse_Java_OpenShift_Imagestream_version>" --name=<application_name> -e "ARTIFACT_COPY_ARGS=-a ." -e "MAVEN_ARGS_APPEND=<additional_args>" -e "ARTIFACT_DIR=<relative_path_of_target_directory>" Replace values as needed: <current_Fuse_Java_OpenShift_Imagestream_version> : The current image stream. <application_name> : Application name of your choice. <additional_args> : arguments to append to Maven. <relative_path_of_target_directory> : the relative path of the application's target directory. Example In this example, MAVEN_ARGS_APPEND is used to build only a specific project in the spring-boot directory: Start the build from the multimodule parent directory. This sends the sources from the local host to OpenShift, where the Maven build runs. Start the app Example 290.2.4. Deploying in Spring Boot and OpenShift Container Platform using JKube To deploy SAP in your project with JKube using the openshift-maven-plugin plugin, follow these steps: Place the connectors in the lib directory of your project: Example: AMD64 Warning Do not add the SAP library files to a custom Maven repository The SAP Java Connector performs validation on the names of the JAR files sapjco3.jar and sapidoc3.jar . If you copy the JAR files to your Maven repository, spring-boot-maven-plugin renames them by appending the version number. This causes validation to fail, preventing the application from deploying properly.
Exclude the embedded connectors from the starter in pom.xml : <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency> Define local connectors as static resources in pom.xml : <resources> <resource> <directory>src/lib/${os.arch}/com/sap/conn/idoc</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> <resource> <directory>src/lib/${os.arch}/com/sap/conn/jco</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> </resources> Configure resources and deployment configuration, specifying the connector path in pom.xml : <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.4.0</version> <configuration> <images> <image> <name>${project.artifactId}:${project.version}</name> <build> <from>${java.docker.image}</from> <assembly> <targetDir>/deployments</targetDir> <layers> <layer> <id>static-files</id> <fileSets> <fileSet> <directory>src/lib/${os.arch}/com/sap/conn/jco</directory> <outputDirectory>static</outputDirectory> <includes> <include>*.so</include> </includes> </fileSet> </fileSets> </layer> </layers> </assembly> </build> </image> </images> </configuration> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> <goal>apply</goal> </goals> </execution> </executions> </plugin> 290.2.4.1. Deploying on OpenShift Once the openshift-maven-plugin is configured in pom.xml , you can import the Fuse Spring Boot image into a specific namespace as a builder image for your application. Start in your application path. Create the project streams: Import the image streams: Create your project: Deploy the application with Maven: 290.3. URI format There are two different kinds of endpoint provided by the SAP component: the Remote Function Call (RFC) endpoints, and the Intermediate Document (IDoc) endpoints. The URI formats for the RFC endpoints are as follows:
sap-srfc-destination:destinationName:rfcName
sap-trfc-destination:destinationName:rfcName
sap-qrfc-destination:destinationName:queueName:rfcName
sap-srfc-server:serverName:rfcName
sap-trfc-server:serverName:rfcName
The URI formats for the IDoc endpoints are as follows:
sap-idoc-destination:destinationName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-idoclist-destination:destinationName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-qidoc-destination:destinationName:queueName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-qidoclist-destination:destinationName:queueName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-idoclist-server:serverName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
The URI formats prefixed by sap- endpointKind -destination define destination endpoints (in other words, Camel producer endpoints), and destinationName is the name of a specific outbound connection to a SAP instance. Outbound connections are named and configured at the component level, as described in Section 290.6.2, "Destination Configuration" . The URI formats prefixed by sap- endpointKind -server define server endpoints (in other words, Camel consumer endpoints) and serverName is the name of a specific inbound connection from a SAP instance. Inbound connections are named and configured at the component level, as described in Section 290.6.3, "Server Configuration" . The other components of an RFC endpoint URI are as follows: rfcName (Required) In a destination endpoint URI, the name of the RFC invoked by the endpoint in the connected SAP instance. In a server endpoint URI, the name of the RFC handled by the endpoint when invoked from the connected SAP instance. queueName Specifies the queue this endpoint sends a SAP request to. The other components of an IDoc endpoint URI are as follows: idocType (Required) Specifies the Basic IDoc Type of an IDoc produced by this endpoint.
idocTypeExtension Specifies the IDoc Type Extension, if any, of an IDoc produced by this endpoint. systemRelease Specifies the associated SAP Basis Release, if any, of an IDoc produced by this endpoint. applicationRelease Specifies the associated Application Release, if any, of an IDoc produced by this endpoint. queueName Specifies the queue this endpoint sends a SAP request to. 290.4. Options 290.4.1. Options for RFC destination endpoints The RFC destination endpoints ( sap-srfc-destination , sap-trfc-destination , and sap-qrfc-destination ) support the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates a SAP stateful session transacted false If true , specifies that this endpoint initiates a SAP transaction 290.4.2. Options for RFC server endpoints The SAP RFC server endpoints ( sap-srfc-server and sap-trfc-server ) support the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates a SAP stateful session. propagateExceptions false ( sap-trfc-server endpoint only) If true , specifies that this endpoint propagates exceptions back to the caller in SAP, instead of to the exchange's exception handler 290.4.3. Options for the IDoc List Server endpoint The SAP IDoc List Server endpoint ( sap-idoclist-server ) supports the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates a SAP stateful session. propagateExceptions false If true , specifies that this endpoint propagates exceptions back to the caller in SAP, instead of to the exchange's exception handler 290.5. Summary of the RFC and IDoc endpoints The SAP component package provides the following RFC and IDoc endpoints: sap-srfc-destination JBoss Fuse SAP Synchronous Remote Function Call Destination Camel component. Use this endpoint in cases where Camel routes require synchronous delivery of requests to and responses from a SAP system. Note The sRFC protocol used by this component delivers requests and responses to and from a SAP system with best effort . In case of a communication error while sending a request, the completion status of a remote function call in the receiving SAP system remains in doubt . sap-trfc-destination JBoss Fuse SAP Transactional Remote Function Call Destination Camel component. Use this endpoint in cases where requests must be delivered to the receiving SAP system at most once . To accomplish this, the component generates a transaction ID, tid , which accompanies every request sent through the component in a route's exchange. The receiving SAP system records the tid accompanying a request before delivering the request; if the SAP system receives the request again with the same tid , it does not deliver the request. Thus, if a route encounters a communication error when sending a request through an endpoint of this component, it can retry sending the request within the same exchange knowing it is delivered and executed only once. Note The tRFC protocol used by this component is asynchronous and does not return a response. Thus the endpoints of this component do not return a response message. Note This component does not guarantee the order of a series of requests through its endpoints, and the delivery and execution order of these requests may differ on the receiving SAP system due to communication errors and resends of a request. For guaranteed delivery order, please see the JBoss Fuse SAP Queued Remote Function Call Destination Camel component.
sap-qrfc-destination JBoss Fuse SAP Queued Remote Function Call Destination Camel component. This component extends the capabilities of the JBoss Fuse SAP Transactional Remote Function Call Destination Camel component by adding in-order delivery guarantees to the delivery of requests through its endpoints. Use this endpoint in cases where a series of requests depend on each other and must be delivered to the receiving SAP system at most once and in order . The component accomplishes the at most once delivery guarantees using the same mechanisms as the JBoss Fuse SAP Transactional Remote Function Call Destination Camel component. The ordering guarantee is accomplished by serializing the requests, in the order they are received, to an inbound queue in the SAP system. Inbound queues are processed by the QIN scheduler within SAP. When the inbound queue is activated , the QIN Scheduler executes the queued requests in order. Note The qRFC protocol used by this component is asynchronous and does not return a response. Thus the endpoints of this component do not return a response message. sap-srfc-server JBoss Fuse SAP Synchronous Remote Function Call Server Camel component. Use this component and its endpoints in cases where a Camel route is required to synchronously handle requests from and responses to a SAP system. sap-trfc-server JBoss Fuse SAP Transactional Remote Function Call Server Camel component. Use this endpoint in cases where the sending SAP system requires at most once delivery of its requests to a Camel route. To accomplish this, the sending SAP system generates a transaction ID, tid , which accompanies every request it sends to the component's endpoints. The sending SAP system first checks with the component whether a given tid has been received by it before sending a series of requests associated with the tid . The component checks the list of received tid s it maintains, records the sent tid if it is not in that list, and then responds to the sending SAP system, indicating whether the tid had already been recorded. If the tid has not been previously recorded, the sending SAP system transmits the series of requests. This enables a sending SAP system to reliably send a series of requests once to a Camel route. sap-idoc-destination JBoss Fuse SAP IDoc Destination Camel component. Use this endpoint in cases where a Camel route is required to send a single Intermediate Document (IDoc) to a SAP system. sap-idoclist-destination JBoss Fuse SAP IDoc List Destination Camel component. Use this endpoint in cases where a Camel route is required to send a list of Intermediate Documents (IDocs) to a SAP system. sap-qidoc-destination JBoss Fuse SAP Queued IDoc Destination Camel component. Use this component and its endpoints in cases where a Camel route is required to send a single Intermediate Document (IDoc) to a SAP system in order . sap-qidoclist-destination JBoss Fuse SAP Queued IDoc List Destination Camel component. Use this component and its endpoints in cases where a Camel route is required to send a list of Intermediate Documents (IDocs) to a SAP system in order . sap-idoclist-server JBoss Fuse SAP IDoc List Server Camel component. Use this endpoint in cases where a sending SAP system requires delivery of Intermediate Document lists to a Camel route. This component uses the tRFC protocol to communicate with SAP, as described in the sap-trfc-server-standalone quick start.
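For example (a minimal sketch, assuming a destination connection named quickstartDest and a server connection named quickstartServer are configured as described in Section 290.6, "Configuration" , and that flightCustomerService is a hypothetical bean registered in the Camel registry), routes address these endpoints like any other Camel component URI:

// Java
import org.apache.camel.builder.RouteBuilder;

public class SapRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        // Handle synchronous RFC calls made from SAP into this route
        from("sap-srfc-server:quickstartServer:BAPI_FLCUST_GETLIST")
            .to("bean:flightCustomerService?method=getList");

        // Invoke a remote function module in SAP and wait for the response
        from("direct:getFlightCustomerList")
            .to("sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST");
    }
}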
SAP RFC destination endpoint An RFC destination endpoint supports outbound communication to SAP, enabling the endpoint to make RFC calls out to ABAP function modules in SAP. An RFC destination endpoint is configured to make an RFC call to a specific ABAP function over a specific connection to a SAP instance. An RFC destination is a logical designation for an outbound connection and has a unique name. An RFC destination is specified by a set of connection parameters called destination data . An RFC destination endpoint extracts an RFC request from the input message of the IN-OUT exchanges it receives and dispatches that request in a function call to SAP. The output message of the exchange contains the response from the function call. Since SAP RFC destination endpoints only support outbound communication, an RFC destination endpoint only supports the creation of producers. SAP RFC server endpoint An RFC server endpoint supports inbound communication from SAP, which enables ABAP applications in SAP to make RFC calls into server endpoints. An ABAP application interacts with an RFC server endpoint as if it were a remote function module. An RFC server endpoint is configured to receive an RFC call to a specific RFC function over a specific connection from a SAP instance. An RFC server is a logical designation for an inbound connection and has a unique name. An RFC server is specified by a set of connection parameters called server data . An RFC server endpoint handles an incoming RFC request and dispatches it as the input message of an IN-OUT exchange. The output message of the exchange is returned as the response of the RFC call. Since SAP RFC server endpoints only support inbound communication, an RFC server endpoint only supports the creation of consumers. SAP IDoc and IDoc list destination endpoints An IDoc destination endpoint supports outbound communication to SAP, which can then perform further processing on the IDoc message. An IDoc document represents a business transaction, which can easily be exchanged with non-SAP systems. An IDoc destination is specified by a set of connection parameters called destination data . An IDoc list destination endpoint is similar to an IDoc destination endpoint, except that the messages it handles consist of a list of IDoc documents. SAP IDoc list server endpoint An IDoc list server endpoint supports inbound communication from SAP, enabling a Camel route to receive a list of IDoc documents from a SAP system. An IDoc list server is specified by a set of connection parameters called server data . Metadata repositories A metadata repository is used to store the following kinds of metadata: Interface descriptions of function modules This metadata is used by the JCo and ABAP runtimes to check RFC calls to ensure the type-safe transfer of data between communication partners before dispatching those calls. A repository is populated with repository data. Repository data is a map of named function templates. A function template contains the metadata describing all the parameters and their typing information passed to and from a function module and has the unique name of the function module it describes. IDoc type descriptions This metadata is used by the IDoc runtime to ensure that the IDoc documents are correctly formatted before being sent to a communication partner. A basic IDoc type consists of a name, a list of permitted segments, and a description of the hierarchical relationship between the segments.
Some additional constraints can be imposed on the segments: a segment can be mandatory or optional; and it is possible to specify a minimum/maximum range for each segment (defining the number of allowed repetitions of that segment). SAP destination and server endpoints thus require access to a repository to send and receive RFC calls and to send and receive IDoc documents. For RFC calls, the metadata for all function modules invoked and handled by the endpoints must reside within the repository; and for IDoc endpoints, the metadata for all IDoc types and IDoc type extensions handled by the endpoints must reside within the repository. The location of the repository used by a destination and server endpoint is specified in the destination data and the server data of their respective connections. In the case of a SAP destination endpoint, the repository it uses typically resides in a SAP system and it defaults to the SAP system it is connected to. This default requires no explicit configuration in the destination data. Furthermore, the metadata for the remote function call that a destination endpoint makes already exists in a repository for any existing function module that it calls. The metadata for calls made by destination endpoints thus requires no configuration in the SAP component. On the other hand, the metadata for function calls handled by server endpoints does not typically reside in the repository of a SAP system and must instead be provided by a repository residing in the SAP component. The SAP component maintains a map of named metadata repositories. The name of a repository corresponds to the name of the server to which it provides metadata. 290.6. Configuration The SAP component maintains three maps to store destination data, server data , and repository data. The destination data store and the server data store use a special configuration object, SapConnectionConfiguration , which automatically gets injected into the SAP component (in the context of Blueprint XML configuration or Spring XML configuration files). The repository data store must be configured directly on the relevant SAP component. 290.6.1. Configuration Overview Overview The SAP component maintains three maps to store destination data, server data , and repository data. The component's property, destinationDataStore , stores destination data keyed by destination name. The property, serverDataStore , stores server data keyed by server name. The property, repositoryDataStore , stores repository data keyed by repository name. You must pass these configurations to the component during its initialization. Example The following example shows how to configure a sample destination data store and a sample server data store in a Blueprint XML file. The sap-configuration bean (of type SapConnectionConfiguration ) is automatically injected into any SAP component used in this XML file. <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ...
<!-- Configures the Inbound and Outbound SAP Connections --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="destinationDataStore"> <map> <entry key="quickstartDest" value-ref="quickstartDestinationData" /> </map> </property> <property name="serverDataStore"> <map> <entry key="quickstartServer" value-ref="quickstartServerData" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl"> <property name="ashost" value="example.com" /> <property name="sysnr" value="00" /> <property name="client" value="000" /> <property name="user" value="username" /> <property name="passwd" value="password" /> <property name="lang" value="en" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartServerData" class="org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl"> <property name="gwhost" value="example.com" /> <property name="gwserv" value="3300" /> <!-- Do not change the following property values --> <property name="progid" value="QUICKSTART" /> <property name="repositoryDestination" value="quickstartDest" /> <property name="connectionCount" value="2" /> </bean> </blueprint> 290.6.2. Destination Configuration Overview The configurations for destinations are maintained in the destinationDataStore property of the SAP component. Each entry in this map configures a distinct outbound connection to an SAP instance. The key for each entry is the name of the outbound connection and is used in the destinationName component of a destination endpoint URI, as described in the URI format section. The value for each entry is a destination data configuration object - org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl - that specifies the configuration of an outbound SAP connection. Sample destination configuration The following Blueprint XML code shows how to configure a sample destination with the name quickstartDest . <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ...
<!-- Create interceptor to support tRFC processing --> <bean id="currentProcessorDefinitionInterceptor" class="org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy" /> <!-- Configures the Inbound and Outbound SAP Connections --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="destinationDataStore"> <map> <entry key="quickstartDest" value-ref="quickstartDestinationData" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl"> <property name="ashost" value="example.com" /> <property name="sysnr" value="00" /> <property name="client" value="000" /> <property name="user" value="username" /> <property name="passwd" value="password" /> <property name="lang" value="en" /> </bean> </blueprint> For example, after configuring the destination as shown in the preceding Blueprint XML file, you could invoke the BAPI_FLCUST_GETLIST remote function call on the quickstartDest destination using the following URI: sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST Interceptor for tRFC and qRFC destinations The preceding sample destination configuration shows the instantiation of a CurrentProcessorDefinitionInterceptStrategy object. This object installs an interceptor in the Camel runtime, enabling the Camel SAP component to keep track of its position within a Camel route while handling RFC transactions. For more details, see the section called "Transactional RFC destination endpoints" . Important This interceptor must be installed in the Camel runtime to properly manage outbound transactional RFC communication. It is critically important for transactional RFC destination endpoints (such as sap-trfc-destination and sap-qrfc-destination ). The Destination RFC Transaction Handlers issue warnings into the Camel log if the strategy is not found at runtime, and in this situation the Camel runtime must be re-provisioned and restarted to properly manage outbound transactional RFC communication. Log-in and authentication options The following table lists the log-in and authentication options for configuring a destination in the SAP destination data store: Name Default Value Description client SAP client, mandatory log-in parameter user Log-in user, log-in parameter for password-based authentication aliasUser Log-in user alias, can be used instead of log-in user userId User identity to use for log-in to the ABAP AS. Used by the JCo runtime, if the destination configuration uses SSO/assertion ticket, certificate, current user, or SNC environment for authentication. If there is no user or user alias, the user ID is mandatory. This ID is not used by the SAP backend; the JCo runtime uses it locally. passwd Log-in password, log-in parameter for password-based authentication lang Log-in language to use instead of the user language mysapsso2 Use the specified SAP Cookie Version 2 as a log-in ticket for SSO-based authentication x509cert Use the specified X509 certificate for certificate-based authentication lcheck Postpone the authentication until the first call: 1 (enable). Use lcheck in special cases only . useSapGui Use a visible SAP GUI, a hidden SAP GUI, or no SAP GUI codePage Additional log-in parameter to define the codepage that is used to convert the log-in parameters. Use codePage in special cases only.
getsso2 Order an SSO ticket after log-in; the obtained ticket is available in the destination attributes denyInitialPassword If set to 1 , using initial passwords leads to an exception (default is 0 ). Connection options The following table lists the connection options for configuring a destination in the SAP destination data store: Name Default Value Description saprouter SAP Router string for connection to systems behind a SAP Router. A SAP Router string contains the chain of SAP Routers and their port numbers and has the form: (/H/<host>[/S/<port>])+ sysnr System number of the SAP ABAP application server, mandatory for a direct connection ashost SAP ABAP application server, mandatory for a direct connection mshost SAP message server, mandatory property for a load balancing connection msserv SAP message server port, optional property for a load balancing connection. To resolve the service names sapmsXXX , the network layer of the operating system performs a look-up in etc/services . If using port numbers instead of symbolic service names, there are no look-ups and additional entries are not needed. gwhost Allows specifying a concrete gateway. Use this for establishing the connection to an application server. If not specified, the gateway on the application server is used. gwserv Set this when using gwhost . Allows specifying the port used on that gateway. If not specified, the port of the gateway on the application server is used. To resolve the service names sapgwXXX , the network layer of the operating system performs a look-up in etc/services . If using port numbers instead of symbolic service names, there are no look-ups and additional entries are not needed. r3name System ID of the SAP system, mandatory property for a load balancing connection. group Group of SAP application servers, mandatory property for a load balancing connection network LAN Set this value depending on the network quality between JCo and your target system to optimize performance. The valid values are LAN or WAN (which is relevant for fast serialization only). WAN uses a slower but more efficient compression algorithm, with data analysis for further compression options. LAN uses a very fast compression algorithm, with only basic data analysis. With the LAN option, the compression ratio is not as efficient but the network transfer time is considered to be less significant. The default setting is LAN . serializationFormat rowBased Format for serialization. Can be rowBased (default) or columnBased (fast serialization). Connection pool options The following table lists the connection pool options for configuring a destination in the SAP destination data store: Name Default Value Description peakLimit 0 The maximum number of simultaneously active outbound connections for a destination. A value of 0 allows an unlimited number of active connections. Otherwise, it is automatically increased to poolCapacity . Default setting is the value of poolCapacity if configured. If poolCapacity is not specified, the default is 0 (unlimited). poolCapacity 1 The maximum number of idle outbound connections kept open by the destination. A value of 0 means no connection pooling (default is 1 ). expirationTime The minimum time in milliseconds a free connection held internally by the destination must be kept open. expirationPeriod The period in milliseconds after which the destination checks expiration for the released connections.
maxGetTime The maximum time in milliseconds to wait for a connection, if the maximum allowed number of connections has already been allocated by the application. Secure network connection options The following table lists the secure network options for configuring a destination in the SAP destination data store: Name Default Value Description sncMode Secure network connection (SNC) mode, 0 (off) or 1 (on) sncPartnername SNC partner, for example: p:CN=R3, O=XYZ-INC, C=EN sncQop SNC level of security: 1 to 9 sncMyname Own SNC name. Overrides environment settings sncLibrary Path to library that provides SNC service Repository options The following table lists the repository options for configuring a destination in the SAP destination data store: Name Default Value Description repositoryDest Specifies which destination to use as repository. repositoryUser This defines the user for repository calls, if a repository destination has not been defined. This enables you to use a different user for repository look-ups. repositoryPasswd The password for a repository user. Mandatory when using a repository user. repositorySnc (Optional) If SNC is used for this destination, it is possible to turn it off for repository connections, if this property is set to 0 . Default setting is the value of jco.client.snc_mode . For special cases only. repositoryRoundtripOptimization Enable the RFC_METADATA_GET API, which provides repository data in one round trip. 1 activates use of RFC_METADATA_GET in the ABAP System; 0 deactivates RFC_METADATA_GET in the ABAP System. If the property is not set, the destination initially does a remote call to check whether RFC_METADATA_GET is available. If it is available, the destination uses it. Note If the repository is already initialized (for example, because it is used by some other destination) this property does not have any effect. Generally, this property is related to the ABAP System, and should have the same value on all destinations pointing to the same ABAP System. See SAP Note 1456826 for backend prerequisites. Trace configuration options The following table lists the trace configuration options for configuring a destination in the SAP destination data store: Name Default Value Description trace Enable/disable RFC trace ( 0 or 1 ) cpicTrace Enable/disable CPIC trace [0..3] 290.6.3. Server Configuration Overview The configurations for servers are maintained in the serverDataStore property of the SAP component. Each entry in this map configures a distinct inbound connection from an SAP instance. The key for each entry is the name of the inbound connection and is used in the serverName component of a server endpoint URI, as described in the URI format section. The value for each entry is a server data configuration object , org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl , that defines the configuration of an inbound SAP connection. Sample server configuration The following Blueprint XML code shows how to create a sample server configuration with the name, quickstartServer . <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ...
<!-- Configures the Inbound and Outbound SAP Connections --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="destinationDataStore"> <map> <entry key="quickstartDest" value-ref="quickstartDestinationData" /> </map> </property> <property name="serverDataStore"> <map> <entry key="quickstartServer" value-ref="quickstartServerData" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl"> <property name="ashost" value="example.com" /> <property name="sysnr" value="00" /> <property name="client" value="000" /> <property name="user" value="username" /> <property name="passwd" value="password" /> <property name="lang" value="en" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id="quickstartServerData" class="org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl"> <property name="gwhost" value="example.com" /> <property name="gwserv" value="3300" /> <!-- Do not change the following property values --> <property name="progid" value="QUICKSTART" /> <property name="repositoryDestination" value="quickstartDest" /> <property name="connectionCount" value="2" /> </bean> </blueprint> Notice how this example also configures a destination connection, quickstartDest , which the server uses to retrieve metadata from a remote SAP instance. This destination is configured in the server data through the repositoryDestination option. If you do not configure this option, you need to create a local metadata repository instead (see Section 290.6.4, "Repository Configuration" ). For example, after configuring the server as shown in the preceding Blueprint XML file, you could handle the BAPI_FLCUST_GETLIST remote function call from an invoking client, using the following URI: sap-srfc-server:quickstartServer:BAPI_FLCUST_GETLIST Required options The required options for the server data configuration object are as follows: Name Default Value Description gwhost Gateway host to register the server connection with. gwserv Gateway service, which is the port on which a registration can be done. To resolve the service names sapgwXXX , the network layer of the operating system performs a look-up in etc/services . If using port numbers instead of symbolic service names, there are no look-ups and additional entries are not needed. progid The program ID with which the registration is done. Serves as identifier on the gateway and in the destination in the ABAP system. repositoryDestination Specifies a destination name that the server can use to retrieve metadata from a metadata repository hosted in a remote SAP server. connectionCount The number of connections to register with the gateway. Secure network connection options The secure network connection options for the server data configuration object are as follows: Name Default Value Description sncMode Secure network connection (SNC) mode, 0 (off) or 1 (on) sncQop SNC level of security, 1 to 9 sncMyname SNC name of your server. Overrides the default SNC name. Typically something like p:CN=JCoServer, O=ACompany, C=EN . sncLib Path to library which provides SNC service.
If this property is not provided, the value of the jco.middleware.snc_lib property is used instead. Other options The other options for the server data configuration object are as follows: Name Default Value Description saprouter SAP router string to use for a system protected by a firewall, which can therefore only be reached through a SAProuter, when registering the server at the gateway of that ABAP System. A typical router string is /H/firewall.hostname/H/ maxStartupDelay The maximum time (in seconds) between two start-up attempts in case of failures. Initially, the waiting time is doubled from 1 second after each start-up failure until the maximum value is reached, or the server can be started successfully. trace Enable/disable RFC trace ( 0 or 1 ) workerThreadCount The maximum number of threads used by the server connection. If not set, the value for the connectionCount is used as the workerThreadCount . The maximum number of threads cannot exceed 99. workerThreadMinCount The minimum number of threads used by the server connection. If not set, the value for connectionCount is used as the workerThreadMinCount . 290.6.4. Repository Configuration Overview The configurations for repositories are maintained in the repositoryDataStore property of the SAP component. Each entry in this map configures a distinct repository. The key for each entry is the name of the repository and this key also corresponds to the name of the server to which this repository is attached. The value of each entry is a repository data configuration object, org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl , that defines the contents of a metadata repository. A repository data object is a map of function template configuration objects, org.fusesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl . Each entry in this map specifies the interface of a function module and the key for each entry is the name of the function module specified. Repository data example The following code shows a simple example of configuring a metadata repository: <?xml version="1.0" encoding="UTF-8"?> <blueprint ... > ... <!-- Configures the sap-srfc-server component --> <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration"> <property name="repositoryDataStore"> <map> <entry key="nplServer" value-ref="nplRepositoryData" /> </map> </property> </bean> <!-- Configures a metadata Repository --> <bean id="nplRepositoryData" class="org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl"> <property name="functionTemplates"> <map> <entry key="BOOK_FLIGHT" value-ref="bookFlightFunctionTemplate" /> </map> </property> </bean> ... </blueprint> Function template properties The interface of a function module consists of four parameter lists by which data is transferred back and forth to the function module in an RFC call. Each parameter list consists of one or more fields, each of which is a named parameter transferred in an RFC call. The following parameter lists and exception list are supported: The import parameter list contains parameter values that are sent to a function module in an RFC call; The export parameter list contains parameter values that are returned by a function module in an RFC call; The changing parameter list contains parameter values that are sent to and returned by a function module in an RFC call; The table parameter list contains internal table values that are sent to and returned by a function module in an RFC call.
The interface of a function module also consists of an exception list of ABAP exceptions that may be raised when the module is invoked in an RFC call. A function template describes the name and type of parameters in each parameter list of a function interface and the ABAP exceptions thrown by the function. A function template object maintains five property lists of metadata objects, as described in the following table. Property Description importParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the parameters that are sent in an RFC call to a function module. changingParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the parameters that are sent and returned in an RFC call to and from a function module. exportParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the parameters that are returned in an RFC call from a function module. tableParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the table parameters that are sent and returned in an RFC call to and from a function module. exceptionList A list of ABAP exception metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.AbapExceptionImpl . Specifies the ABAP exceptions potentially raised in an RFC call of a function module. Function template example The following example shows an outline of how to configure a function template: <bean id="bookFlightFunctionTemplate" class="org.fusesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl"> <property name="importParameterList"> <list> ... </list> </property> <property name="exportParameterList"> <list> ... </list> </property> </bean> List field metadata properties A list field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl , specifies the name and type of a field in a parameter list. For an elementary parameter field ( CHAR , DATE , BCD , TIME , BYTE , NUM , FLOAT , INT , INT1 , INT2 , DECF16 , DECF34 , STRING , XSTRING ), the following table lists the configuration properties that may be set on a list field metadata object: Name Default Value Description name - The name of the parameter field. type - The parameter type of the field. byteLength - The field length in bytes for a non-Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . unicodeByteLength - The field length in bytes for a Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . decimals 0 The number of decimals in the field value; only required for parameter types BCD and FLOAT . See Section 290.9, "Message Body for RFC" . optional false If true , the field is optional and need not be set in an RFC call. Note All elementary parameter fields require that the name , type , byteLength and unicodeByteLength properties be specified in the field metadata object. In addition, the BCD , FLOAT , DECF16 and DECF34 fields require the decimals property to be specified in the field metadata object. For a complex parameter field of type TABLE or STRUCTURE , the following table lists the configuration properties that may be set on a list field metadata object: Name Default Value Description name - The name of the parameter field type - The parameter type of the field recordMetaData - The metadata for the structure or table. A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , is passed to specify the fields in the structure or table rows.
optional false If true , the field is optional and need not be set in an RFC call. Note All complex parameter fields require that the name , type and recordMetaData properties be specified in the field metadata object. The value of the recordMetaData property is a record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , which specifies the structure of a nested structure or the structure of a table row. Elementary list field metadata example The following metadata configuration specifies an optional, 24-digit packed BCD number parameter with two decimal places named TICKET_PRICE : <bean class="org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl"> <property name="name" value="TICKET_PRICE" /> <property name="type" value="BCD" /> <property name="byteLength" value="12" /> <property name="unicodeByteLength" value="24" /> <property name="decimals" value="2" /> <property name="optional" value="true" /> </bean> Complex list field metadata example The following metadata configuration specifies a required TABLE parameter named CONNINFO with a row structure specified by the connectionInfo record metadata object: <bean class="org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl"> <property name="name" value="CONNINFO" /> <property name="type" value="TABLE" /> <property name="recordMetaData" ref="connectionInfo" /> </bean> Record metadata properties A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , specifies the name and contents of a nested STRUCTURE or the row of a TABLE parameter. A record metadata object maintains a list of record field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl , which specify the parameters that reside in the nested structure or table row. The following table lists configuration properties that may be set on a record metadata object: Name Default Value Description name - The name of the record. recordFieldMetaData - The list of record field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl . Specifies the fields contained within the structure. Note All properties of the record metadata object are required. Record metadata example The following example shows how to configure a record metadata object: <bean id="connectionInfo" class="org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl"> <property name="name" value="CONNECTION_INFO" /> <property name="recordFieldMetaData"> <list> ... </list> </property> </bean> Record field metadata properties A record field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl , specifies the name and type of a parameter field within a structure. A record field metadata object is similar to a parameter field metadata object, except that the offsets of the individual field locations within the nested structure or table row must be additionally specified. The non-Unicode and Unicode offsets of an individual field must be calculated and specified from the sum of the non-Unicode and Unicode byte lengths of the preceding fields in the structure or row. Note Failure to properly specify the offsets of fields in nested structures and table rows causes the field storage of parameters in the underlying JCo and ABAP runtimes to overlap and prevents the proper transfer of values in RFC calls.
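For example (a sketch using the byte lengths from the elementary field table in Section 290.9, "Message Body for RFC" ): if the first field of a row structure is a 10-character CHAR field (byteLength 10, unicodeByteLength 20) and the second field is a DATE field (byteLength 8, unicodeByteLength 16), then the CHAR field has byteOffset 0 and unicodeByteOffset 0, the DATE field has byteOffset 10 and unicodeByteOffset 20, and a third field following the DATE field would have byteOffset 18 (10 + 8) and unicodeByteOffset 36 (20 + 16).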
For an elementary parameter field ( CHAR , DATE , BCD , TIME , BYTE , NUM , FLOAT , INT , INT1 , INT2 , DECF16 , DECF34 , STRING , XSTRING ), the following table lists the configuration properties that may be set on a record field metadata object: Name Default Value Description name - The name of the parameter field type - The parameter type of the field byteLength - The field length in bytes for a non-Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . unicodeByteLength - The field length in bytes for a Unicode layout. This value depends on the parameter type. See Section 290.9, "Message Body for RFC" . byteOffset - The field offset in bytes for a non-Unicode layout. This offset is the byte location of the field within the enclosing structure. unicodeByteOffset - The field offset in bytes for a Unicode layout. This offset is the byte location of the field within the enclosing structure. decimals 0 The number of decimals in the field value; only required for parameter types BCD and FLOAT . See Section 290.9, "Message Body for RFC" . For a complex parameter field of type TABLE or STRUCTURE , the following table lists the configuration properties that may be set on a record field metadata object: Name Default Value Description name - The name of the parameter field type - The parameter type of the field byteOffset - The field offset in bytes for a non-Unicode layout. This offset is the byte location of the field within the enclosing structure. unicodeByteOffset - The field offset in bytes for a Unicode layout. This offset is the byte location of the field within the enclosing structure. recordMetaData - The metadata for the structure or table. A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , is passed to specify the fields in the structure or table rows. Elementary record field metadata example The following metadata configuration specifies a DATE field parameter named ARRDATE located 85 bytes into the enclosing structure in the case of a non-Unicode layout and located 170 bytes into the enclosing structure in the case of a Unicode layout: <bean class="org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl"> <property name="name" value="ARRDATE" /> <property name="type" value="DATE" /> <property name="byteLength" value="8" /> <property name="unicodeByteLength" value="16" /> <property name="byteOffset" value="85" /> <property name="unicodeByteOffset" value="170" /> </bean> Complex record field metadata example The following metadata configuration specifies a STRUCTURE field parameter named FLTINFO with a structure specified by the flightInfo record metadata object. The parameter is located at the beginning of the enclosing structure in the case of both a non-Unicode and a Unicode layout: <bean class="org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl"> <property name="name" value="FLTINFO" /> <property name="type" value="STRUCTURE" /> <property name="byteOffset" value="0" /> <property name="unicodeByteOffset" value="0" /> <property name="recordMetaData" ref="flightInfo" /> </bean> 290.7. Message Headers The SAP component supports the following message headers: Header Description CamelSap.scheme The URI scheme of the last endpoint to process the message. One of the following values: sap-srfc-destination sap-trfc-destination sap-qrfc-destination sap-srfc-server sap-trfc-server sap-idoc-destination sap-idoclist-destination sap-qidoc-destination sap-qidoclist-destination sap-idoclist-server CamelSap.destinationName The destination name of the last destination endpoint to process the message. CamelSap.serverName The server name of the last server endpoint to process the message. CamelSap.queueName The queue name of the last queuing endpoint to process the message. CamelSap.rfcName The RFC name of the last RFC endpoint to process the message. CamelSap.idocType The IDoc type of the last IDoc endpoint to process the message. CamelSap.idocTypeExtension The IDoc type extension, if any, of the last IDoc endpoint to process the message. CamelSap.systemRelease The system release, if any, of the last IDoc endpoint to process the message.
CamelSap.applicationRelease The application release, if any, of the last IDoc endpoint to process the message. 290.8. Exchange Properties The SAP component adds the following exchange properties: Property Description CamelSap.destinationPropertiesMap A map containing the properties of each SAP destination encountered by the exchange. The map is keyed by destination name and each entry is a java.util.Properties object containing the configuration properties of that destination. CamelSap.serverPropertiesMap A map containing the properties of each SAP server encountered by the exchange. The map is keyed by server name and each entry is a java.util.Properties object containing the configuration properties of that server. 290.9. Message Body for RFC Request and response objects An SAP endpoint expects to receive a message with a message body containing an SAP request object and returns a message with a message body containing an SAP response object. SAP requests and responses are fixed map data structures containing named fields, with each field having a predefined data type. Note The named fields in an SAP request and response are specific to an SAP endpoint, with each endpoint defining the parameters in the SAP request and the acceptable response. An SAP endpoint provides factory methods to create the request and response objects that are specific to it. Structure objects Both SAP request and response objects are represented in Java as a structure object which supports the org.fusesource.camel.component.sap.model.rfc.Structure interface. This interface extends both the java.util.Map and org.eclipse.emf.ecore.EObject interfaces. The field values in a structure object are accessed through their field names using the map interface. In addition, the structure interface provides a type-restricted method to retrieve field values. Structure objects are implemented in the component runtime using the Eclipse Modeling Framework (EMF) and support that framework's EObject interface. Instances of a structure object have attached metadata which define and restrict the structure and contents of the map of fields it provides. This metadata can be accessed and introspected using the standard methods provided by EMF. Please refer to the EMF documentation for further details. Note Attempts to get a parameter not defined on a structure object return null . Attempts to set a parameter not defined on a structure, or to set the value of a parameter with an incorrect type, throw an exception. As discussed in the following sections, structure objects can contain fields that contain values of the complex field types, STRUCTURE and TABLE . Note It is unnecessary to create instances of these types and add them to the structure. Instances of these field values are created on demand, if necessary, when accessed in the enclosing structure. Field types The fields that reside within the structure object of an SAP request or response may be either elementary or complex . An elementary field contains a single scalar value, whereas a complex field contains one or more fields of an elementary or complex type. Elementary field types An elementary field may be a character, numeric, hexadecimal or string field type.
The following table summarizes the types of elementary fields that may reside in a structure object: Field Type Corresponding Java Type Byte Length Unicode Byte Length Number of Decimal Digits Description CHAR java.lang.String 1 to 65535 1 to 65535 - ABAP Type 'C': Fixed sized character string DATE java.util.Date 8 16 - ABAP Type 'D': Date (format: YYYYMMDD) BCD java.math.BigDecimal 1 to 16 1 to 16 0 to 14 ABAP Type 'P': Packed BCD number. A BCD number contains two digits per byte. TIME java.util.Date 6 12 - ABAP Type 'T': Time (format: HHMMSS) BYTE byte[] 1 to 65535 1 to 65535 - ABAP Type 'X': Fixed sized byte array NUM java.lang.String 1 to 65535 1 to 65535 - ABAP Type 'N': Fixed sized numeric character string FLOAT java.lang.Double 8 8 0 to 15 ABAP Type 'F': Floating point number INT java.lang.Integer 4 4 - ABAP Type 'I': 4-byte Integer INT2 java.lang.Integer 2 2 - ABAP Type 'S': 2-byte Integer INT1 java.lang.Integer 1 1 - ABAP Type 'B': 1-byte Integer DECF16 java.math.BigDecimal 8 8 16 ABAP Type 'decfloat16': 8-byte Decimal Floating Point Number DECF34 java.math.BigDecimal 16 16 34 ABAP Type 'decfloat34': 16-byte Decimal Floating Point Number STRING java.lang.String 8 8 - ABAP Type 'G': Variable length character string XSTRING byte[] 8 8 - ABAP Type 'Y': Variable length byte array Character field types A character field contains a fixed sized character string that may use either a non-Unicode or Unicode character encoding in the underlying JCo and ABAP runtimes. Non-Unicode character strings encode one character per byte. Unicode character strings are encoded in two bytes using UTF-16 encoding. Character field values are represented in Java as java.lang.String objects and the underlying JCo runtime is responsible for the conversion to their ABAP representation. A character field declares its field length in its associated byteLength and unicodeByteLength properties, which determine the length of the field's character string in each encoding system. CHAR A CHAR character field is a text field containing alphanumeric characters and corresponds to the ABAP type C. NUM A NUM character field is a numeric text field containing numeric characters only and corresponds to the ABAP type N. DATE A DATE character field is an 8 character date field with the year, month and day formatted as YYYYMMDD and corresponds to the ABAP type D. TIME A TIME character field is a 6 character time field with the hours, minutes and seconds formatted as HHMMSS and corresponds to the ABAP type T. Numeric field types A numeric field contains a number. The following numeric field types are supported: INT An INT numeric field is an integer field stored as a 4-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type I. An INT field value is represented in Java as a java.lang.Integer object. INT2 An INT2 numeric field is an integer field stored as a 2-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type S. An INT2 field value is represented in Java as a java.lang.Integer object. INT1 An INT1 field is an integer field stored as a 1-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type B. An INT1 field value is represented in Java as a java.lang.Integer object. FLOAT A FLOAT field is a binary floating point number field stored as an 8-byte double value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type F.
A FLOAT field declares the number of decimal digits that the field's value contains in its associated decimals property. In the case of a FLOAT field, this decimals property can have a value between 1 and 15 digits. A FLOAT field value is represented in Java as a java.lang.Double object. BCD A BCD field is a binary coded decimal field stored as a 1 to 16 byte packed number in the underlying JCo and ABAP runtimes and corresponds to the ABAP type P. A packed number stores two decimal digits per byte. A BCD field declares its field length in its associated byteLength and unicodeByteLength properties. In the case of a BCD field, these properties can have a value between 1 and 16 bytes and both properties have the same value. A BCD field declares the number of decimal digits that the field's value contains in its associated decimals property. In the case of a BCD field, this decimals property can have a value between 1 and 14 digits. A BCD field value is represented in Java as a java.math.BigDecimal . DECF16 A DECF16 field is a decimal floating point stored as an 8-byte IEEE 754 decimal64 floating point value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type decfloat16 . The value of a DECF16 field has 16 decimal digits. The value of a DECF16 field is represented in Java as java.math.BigDecimal . DECF34 A DECF34 field is a decimal floating point stored as a 16-byte IEEE 754 decimal128 floating point value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type decfloat34 . The value of a DECF34 field has 34 decimal digits. The value of a DECF34 field is represented in Java as java.math.BigDecimal . Hexadecimal field types A hexadecimal field contains raw binary data. The following hexadecimal field types are supported: BYTE A BYTE field is a fixed sized byte string stored as a byte array in the underlying JCo and ABAP runtimes and corresponds to the ABAP type X. A BYTE field declares its field length in its associated byteLength and unicodeByteLength properties. In the case of a BYTE field, these properties can have a value between 1 and 65535 bytes and both properties have the same value. The value of a BYTE field is represented in Java as a byte[] object. String field types A string field references a variable length string value. The length of that string value is not fixed until runtime. The storage for the string value is dynamically created in the underlying JCo and ABAP runtimes. The storage for the string field itself is fixed and contains only a string header. STRING A STRING field refers to a character string and is stored in the underlying JCo and ABAP runtimes as an 8-byte value. It corresponds to the ABAP type G. The value of the STRING field is represented in Java as a java.lang.String object. XSTRING An XSTRING field refers to a byte string and is stored in the underlying JCo and ABAP runtimes as an 8-byte value. It corresponds to the ABAP type Y. The value of the XSTRING field is represented in Java as a byte[] object. Complex field types A complex field may be either a structure or table field type. The following table summarizes these complex field types.
Field Type Corresponding Java Type Byte Length Unicode Byte Length Number of Decimal Digits Description STRUCTURE org.fusesource.camel.component.sap.model.rfc.Structure Total of individual field byte lengths Total of individual field Unicode byte lengths - ABAP Type 'u' & 'v': Heterogeneous Structure TABLE org.fusesource.camel.component.sap.model.rfc.Table Byte length of row structure Unicode byte length of row structure - ABAP Type 'h': Table Structure field types A STRUCTURE field contains a structure object and is stored in the underlying JCo and ABAP runtimes as an ABAP structure record. It corresponds to either an ABAP type u or v . The value of a STRUCTURE field is represented in Java as a structure object with the interface org.fusesource.camel.component.sap.model.rfc.Structure . Table field types A TABLE field contains a table object and is stored in the underlying JCo and ABAP runtimes as an ABAP internal table. It corresponds to the ABAP type h . The value of the field is represented in Java by a table object with the interface org.fusesource.camel.component.sap.model.rfc.Table . Table objects A table object is a homogeneous list data structure containing rows of structure objects with the same structure. A table object supports the org.fusesource.camel.component.sap.model.rfc.Table interface, which extends both the java.util.List and org.eclipse.emf.ecore.EObject interfaces. public interface Table<S extends Structure> extends org.eclipse.emf.ecore.EObject, java.util.List<S> { /** * Creates and adds table row at end of row list */ S add(); /** * Creates and adds table row at index in row list */ S add(int index); } The list of rows in a table object is accessed and managed using the standard methods defined in the list interface. In addition, the table interface provides two factory methods for creating and adding structure objects to the row list. Table objects are implemented in the component runtime using the Eclipse Modeling Framework (EMF) and support that framework's EObject interface. Instances of a table object have attached metadata which define and restrict the structure and contents of the rows it provides. This metadata can be accessed and introspected using the standard methods provided by EMF. Please refer to the EMF documentation for further details. Note Attempts to add or set a row structure value of the wrong type throw an exception. 290.10. Message Body for IDoc IDoc message type When using one of the IDoc Camel SAP endpoints, the type of the message body depends on which particular endpoint you are using. For a sap-idoc-destination endpoint or a sap-qidoc-destination endpoint, the message body is of Document type: org.fusesource.camel.component.sap.model.idoc.Document For a sap-idoclist-destination endpoint, a sap-qidoclist-destination endpoint, or a sap-idoclist-server endpoint, the message body is of DocumentList type: org.fusesource.camel.component.sap.model.idoc.DocumentList The IDoc document model For the Camel SAP component, an IDoc document is modelled using the Eclipse Modeling Framework (EMF), which provides a wrapper API around the underlying SAP IDoc API. The most important types in this model are Document and Segment . The Document type represents an IDoc document instance. In outline, the Document interface exposes the following methods: The following kinds of method are exposed by the Document interface: Methods for accessing the control record Most of the methods are for accessing or modifying field values of the IDoc control record. These methods are of the form getAttributeName and setAttributeName , where AttributeName is the name of a field value (see Table 290.2, "IDoc Document Attributes" ).
Method for accessing the document contents The getRootSegment method provides access to the document contents (IDoc data records), returning the contents as a Segment object. Each Segment object can contain an arbitrary number of child segments, and the segments can be nested to an arbitrary degree. Note The precise layout of the segment hierarchy is defined by the particular IDoc type of the document. When creating (or reading) a segment hierarchy, therefore, you must be sure to follow the exact structure as defined by the IDoc type. The Segment type is used to access the data records of the IDoc document, where the segments are laid out in accordance with the structure defined by the document's IDoc type. In outline, the Segment interface exposes the following methods:

// Java
package org.fusesource.camel.component.sap.model.idoc;

public interface Segment extends EObject, java.util.Map<String, Object> {
    // Returns the value of the 'Parent' reference.
    Segment getParent();

    // Returns an immutable list of all child segments.
    <S extends Segment> EList<S> getChildren();

    // Returns a list of child segments of the specified segment type.
    <S extends Segment> SegmentList<S> getChildren(String segmentType);

    EList<String> getTypes();
    Document getDocument();
    String getDescription();
    String getType();
    String getDefinition();
    int getHierarchyLevel();
    String getIdocType();
    String getIdocTypeExtension();
    String getSystemRelease();
    String getApplicationRelease();
    int getNumFields();
    long getMaxOccurrence();
    long getMinOccurrence();
    boolean isMandatory();
    boolean isQualified();
    int getRecordLength();

    <T> T get(Object key, Class<T> type);
}

The getChildren(String segmentType) method is particularly useful for adding new (nested) children to a segment. It returns an object of type SegmentList , which is defined as follows:

// Java
package org.fusesource.camel.component.sap.model.idoc;

public interface SegmentList<S extends Segment> extends EObject, EList<S> {
    S add();
    S add(int index);
}

Hence, to create a data record of E1SCU_CRE type, you could use Java code like the following: Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren("E1SCU_CRE").add(); How an IDoc is related to a Document object According to the SAP documentation, an IDoc document consists of the following main parts: Control record The control record (which contains the metadata for the IDoc document) is represented by the attributes on the Document object - see Table 290.2, "IDoc Document Attributes" for details. Data records The data records are represented by the Segment objects, which are constructed as a nested hierarchy of segments. You can access the root segment through the Document.getRootSegment method. Status records In the Camel SAP component, the status records are not represented by the document model. But you do have access to the latest status value through the status attribute on the control record. Example of creating a Document instance For example, Example 290.1, "Creating an IDoc Document in Java" shows how to create an IDoc document with the IDoc type, FLCUSTOMER_CREATEFROMDATA01 , using the IDoc model API in Java. Example 290.1. Creating an IDoc Document in Java

// Java
import org.fusesource.camel.component.sap.model.idoc.Document;
import org.fusesource.camel.component.sap.model.idoc.Segment;

// Create a new IDoc instance using the modelling classes

// Get the SAP Endpoint bean from the Camel context.
// In this example, it's a 'sap-idoc-destination' endpoint.
SapTransactionalIDocDestinationEndpoint endpoint =
    exchange.getContext().getEndpoint(
        "bean:SapEndpointBeanID",
        SapTransactionalIDocDestinationEndpoint.class
    );

// The endpoint automatically populates some required control record attributes
Document document = endpoint.createDocument();

// Initialize additional control record attributes
document.setMessageType("FLCUSTOMER_CREATEFROMDATA");
document.setRecipientPartnerNumber("QUICKCLNT");
document.setRecipientPartnerType("LS");
document.setSenderPartnerNumber("QUICKSTART");
document.setSenderPartnerType("LS");

Segment rootSegment = document.getRootSegment();

Segment E1SCU_CRE_Segment = rootSegment.getChildren("E1SCU_CRE").add();

Segment E1BPSCUNEW_Segment = E1SCU_CRE_Segment.getChildren("E1BPSCUNEW").add();
E1BPSCUNEW_Segment.put("CUSTNAME", "Fred Flintstone");
E1BPSCUNEW_Segment.put("FORM", "Mr.");
E1BPSCUNEW_Segment.put("STREET", "123 Rubble Lane");
E1BPSCUNEW_Segment.put("POSTCODE", "01234");
E1BPSCUNEW_Segment.put("CITY", "Bedrock");
E1BPSCUNEW_Segment.put("COUNTR", "US");
E1BPSCUNEW_Segment.put("PHONE", "800-555-1212");
E1BPSCUNEW_Segment.put("EMAIL", "[email protected]");
E1BPSCUNEW_Segment.put("CUSTTYPE", "P");
E1BPSCUNEW_Segment.put("DISCOUNT", "005");
E1BPSCUNEW_Segment.put("LANGU", "E");

Document attributes Table 290.2, "IDoc Document Attributes" shows the control record attributes that you can set on the Document object. Table 290.2.
IDoc Document Attributes
Attribute | Length | SAP Field | Description
archiveKey | 70 | ARCKEY | EDI archive key
client | 3 | MANDT | Client
creationDate | 8 | CREDAT | Date IDoc was created
creationTime | 6 | CRETIM | Time IDoc was created
direction | 1 | DIRECT | Direction
eDIMessage | 14 | REFMES | Reference to message
eDIMessageGroup | 14 | REFGRP | Reference to message group
eDIMessageType | 6 | STDMES | EDI message type
eDIStandardFlag | 1 | STD | EDI standard
eDIStandardVersion | 6 | STDVRS | Version of EDI standard
eDITransmissionFile | 14 | REFINT | Reference to interchange file
iDocCompoundType | 8 | DOCTYP | IDoc type
iDocNumber | 16 | DOCNUM | IDoc number
iDocSAPRelease | 4 | DOCREL | SAP Release of IDoc
iDocType | 30 | IDOCTP | Name of basic IDoc type
iDocTypeExtension | 30 | CIMTYP | Name of extension type
messageCode | 3 | MESCOD | Logical message code
messageFunction | 3 | MESFCT | Logical message function
messageType | 30 | MESTYP | Logical message type
outputMode | 1 | OUTMOD | Output mode
recipientAddress | 10 | RCVSAD | Receiver address (SADR)
recipientLogicalAddress | 70 | RCVLAD | Logical address of receiver
recipientPartnerFunction | 2 | RCVPFC | Partner function of receiver
recipientPartnerNumber | 10 | RCVPRN | Partner number of receiver
recipientPartnerType | 2 | RCVPRT | Partner type of receiver
recipientPort | 10 | RCVPOR | Receiver port (SAP System, EDI subsystem)
senderAddress | | SNDSAD | Sender address (SADR)
senderLogicalAddress | 70 | SNDLAD | Logical address of sender
senderPartnerFunction | 2 | SNDPFC | Partner function of sender
senderPartnerNumber | 10 | SNDPRN | Partner number of sender
senderPartnerType | 2 | SNDPRT | Partner type of sender
senderPort | 10 | SNDPOR | Sender port (SAP System, EDI subsystem)
serialization | 20 | SERIAL | EDI/ALE: Serialization field
status | 2 | STATUS | Status of IDoc
testFlag | 1 | TEST | Test flag
Setting document attributes in Java When setting the control record attributes in Java (from Table 290.2, "IDoc Document Attributes" ), the usual convention for Java bean properties is followed. That is, a name attribute can be accessed through the getName and setName methods for getting and setting the attribute value. For example, the iDocType , iDocTypeExtension , and messageType attributes can be set as follows on a Document object: // Java document.setIDocType("FLCUSTOMER_CREATEFROMDATA01"); document.setIDocTypeExtension(""); document.setMessageType("FLCUSTOMER_CREATEFROMDATA"); Setting document attributes in XML When setting the control record attributes in XML, the attributes must be set on the idoc:Document element. For example, the iDocType , iDocTypeExtension , and messageType attributes can be set as follows: <?xml version="1.0" encoding="ASCII"?> <idoc:Document ... iDocType="FLCUSTOMER_CREATEFROMDATA01" iDocTypeExtension="" messageType="FLCUSTOMER_CREATEFROMDATA" ... > ... </idoc:Document> 290.11. Transaction Support BAPI transaction model The SAP Component supports the BAPI transaction model for outbound communication with SAP. A destination endpoint with a URL containing the transacted option set to true initiates a stateful session on the outbound connection of the endpoint and registers a Camel Synchronization object with the exchange. This synchronization object calls the BAPI service method BAPI_TRANSACTION_COMMIT and ends the stateful session when the processing of the message exchange is complete. If the processing of the message exchange fails, the synchronization object calls the BAPI service method BAPI_TRANSACTION_ROLLBACK and ends the stateful session.
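To make the transacted option concrete, the following is a minimal Java DSL sketch, assuming a destination named nplDest ; the same endpoint URI appears in Example 2 later in this chapter, and the commit and rollback behaviour noted in the comments is the behaviour described in the preceding paragraph.

// Java: enabling the BAPI transaction model on a destination endpoint.
// On successful completion of the exchange, the registered Synchronization
// calls BAPI_TRANSACTION_COMMIT; on failure, it calls BAPI_TRANSACTION_ROLLBACK.
from("direct:createFlightTrip")
    .to("bean:createFlightTripRequest")
    .to("sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true")
    .to("bean:returnFlightTripResponse");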
RFC transaction model The tRFC protocol accomplishes an AT-MOST-ONCE delivery and processing guarantee by identifying each transactional request with a unique transaction identifier (TID). A TID accompanies each request sent in the protocol. A sending application using the tRFC protocol must identify each instance of a request with a unique TID when sending the request. An application may send a request with a given TID multiple times, but the protocol ensures that the request is delivered and processed in the receiving system at most once. An application may choose to resend a request with a given TID when it encounters a communication or system error while sending the request and is thus in doubt as to whether that request was delivered and processed in the receiving system. By resending a request when encountering a communication error, a client application using the tRFC protocol can thus ensure EXACTLY-ONCE delivery and processing guarantees for its request. Which transaction model to use? A BAPI transaction is an application-level transaction, in the sense that it imposes ACID guarantees on the persistent data changes performed by a BAPI method or RFC function in the SAP database. An RFC transaction is a communication transaction, in the sense that it imposes delivery guarantees (AT-MOST-ONCE, EXACTLY-ONCE, EXACTLY-ONCE-IN-ORDER) on requests to a BAPI method and/or RFC function. Transactional RFC destination endpoints The following destination endpoints support RFC transactions: sap-trfc-destination sap-qrfc-destination A single Camel route can include multiple transactional RFC destination endpoints, sending messages to multiple RFC destinations and even sending messages to the same RFC destination multiple times. This implies that the Camel SAP component potentially needs to keep track of many transaction IDs (TIDs) for each Exchange object passing along a route. If the route processing fails and must be retried, the situation becomes more complicated: the RFC transaction semantics demand that each RFC destination along the route must be invoked using the same TID that was used the first time around (and the TIDs for each of the destinations are distinct from each other). In other words, the Camel SAP component must keep track of which TID was used at which point along the route, and remember this information so that the TIDs can be replayed in the correct order. By default, Camel does not provide a mechanism that enables an Exchange to know where it is in a route. To provide such a mechanism, it is necessary to install the CurrentProcessorDefinitionInterceptStrategy interceptor into the Camel runtime. This interceptor is required to keep track of the TIDs in a route that uses the Camel SAP component. For details of how to configure the interceptor, see the section called "Interceptor for tRFC and qRFC destinations" . Transactional RFC server endpoints The following server endpoints support RFC transactions: sap-trfc-server When a Camel exchange processing a transactional request encounters a processing error, Camel handles the processing error through its standard error handling mechanisms. If the Camel route processing the exchange is configured to propagate the error back to the caller, the SAP server endpoint that initiated the exchange takes note of the failure and the sending SAP system is notified of the error. The sending SAP system can then respond by sending another transaction request with the same TID to process the request again.
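As an illustration of the pieces just described, the following is a minimal Java DSL sketch, not taken from this guide: the destination name quickstartDest and the RFC name BAPI_FLTRIP_CREATE are placeholder assumptions, and registering the interceptor through CamelContext.addInterceptStrategy is one possible way to install it (the component documentation also shows a Blueprint bean for the same purpose).

// Java: registering the TID-tracking interceptor and sending to a tRFC destination.
import org.apache.camel.builder.RouteBuilder;
import org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy;

public class TrfcRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Required so the SAP component can track and replay TIDs
        // in the correct order if the route is retried.
        getContext().addInterceptStrategy(new CurrentProcessorDefinitionInterceptStrategy());

        from("direct:createFlightTrip")
            .to("sap-trfc-destination:quickstartDest:BAPI_FLTRIP_CREATE");
    }
}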
290.12. XML Serialization for RFC Overview SAP request and response objects support an XML serialization format which enables these objects to be serialized to and from an XML document. XML namespace Each RFC in a repository defines a specific XML namespace for the elements which compose the serialized forms of its Request and Response objects. The form of this namespace URL is as follows:

http://sap.fusesource.org/rfc/<Repository Name>/<RFC Name>

RFC namespace URLs have a common http://sap.fusesource.org/rfc prefix followed by the name of the repository in which the RFC's metadata is defined. The final component in the URL is the name of the RFC itself. Request and response XML documents An SAP request object is serialized into an XML document with the root element of that document named Request and scoped by the namespace of the request's RFC. <?xml version="1.0" encoding="ASCII"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> ... </BOOK_FLIGHT:Request> An SAP response object is serialized into an XML document with the root element of that document named Response and scoped by the namespace of the response's RFC. <?xml version="1.0" encoding="ASCII"?> <BOOK_FLIGHT:Response xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> ... </BOOK_FLIGHT:Response> Structure fields Structure fields in parameter lists or nested structures are serialized as elements. The element name of the serialized structure corresponds to the field name of the structure within the enclosing parameter list, structure, or table row entry in which it resides. <BOOK_FLIGHT:FLTINFO xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> ... </BOOK_FLIGHT:FLTINFO> Note The type name of the structure element in the RFC namespace corresponds to the name of the record metadata object which defines the structure, as in the following example: <xs:schema targetNamespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT" xmlns:xs="http://www.w3.org/2001/XMLSchema"> ... <xs:complexType name="FLTINFO_STRUCTURE"> ... </xs:complexType> ... </xs:schema> This distinction is important when specifying a JAXB bean to marshal and unmarshal the structure as is seen in Section 290.14.3, "Example 3: Handling Requests from SAP" . Table fields Table fields in parameter lists or nested structures are serialized as elements. The element name of the serialized table corresponds to the field name of the table within the enclosing parameter list, structure, or table row entry in which it resides. The table element contains a series of row elements to hold the serialized values of the table's row entries. <BOOK_FLIGHT:CONNINFO xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT"> <row ... > ... </row> ... <row ... > ... </row> </BOOK_FLIGHT:CONNINFO> Note The type name of the table element in the RFC namespace corresponds to the name of the record metadata object which defines the row structure of the table suffixed by _TABLE . The type name of the table row element in the RFC namespace corresponds to the name of the record metadata object which defines the row structure of the table, as in the following example: <xs:schema targetNamespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT" xmlns:xs="http://www.w3.org/2001/XMLSchema"> ... <xs:complexType name="CONNECTION_INFO_STRUCTURE_TABLE"> <xs:sequence> <xs:element name="row" minOccurs="0" maxOccurs="unbounded" type="CONNECTION_INFO_STRUCTURE"/> </xs:sequence> </xs:complexType> <xs:complexType name="CONNECTION_INFO_STRUCTURE"> ...
</xs:complexType> ... </xs:schema> This distinction is important when specifying a JAXB bean to marshal and unmarshal the structure as is seen in Section 290.14.3, "Example 3: Handling Requests from SAP" . Elementary fields Elementary fields in parameter lists or nested structures are serialized as attributes on the element of the enclosing parameter list or structure. The attribute name of the serialized field corresponds to the field name of the field within the enclosing parameter list, structure, or table row entry in which it resides, as in the following example: <?xml version="1.0" encoding="ASCII"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT" CUSTNAME="James Legrand" PASSFORM="Mr" PASSNAME="Travelin Joe" PASSBIRTH="1990-03-17T00:00:00.000-0500" FLIGHTDATE="2014-03-19T00:00:00.000-0400" TRAVELAGENCYNUMBER="00000110" DESTINATION_FROM="SFO" DESTINATION_TO="FRA"/> Date and time formats Date and time fields are serialized into attribute values using the following format:

yyyy-MM-dd'T'HH:mm:ss.SSSZ

Date fields are serialized with only the year, month, day, and timezone components set: DEPDATE="2014-03-19T00:00:00.000-0400" Time fields are serialized with only the hour, minute, second, millisecond, and timezone components set: DEPTIME="1970-01-01T16:00:00.000-0500" 290.13. XML Serialization for IDoc Overview An IDoc message body can be serialized into an XML string format with the help of a built-in type converter. XML namespace Each serialized IDoc is associated with an XML namespace, which has the following general format:

http://sap.fusesource.org/idoc/repositoryName/idocType/idocTypeExtension/systemRelease/applicationRelease

Both the repositoryName (name of the remote SAP metadata repository) and the idocType (IDoc document type) are mandatory, but the other components of the namespace can be left blank. For example, you could have an XML namespace like the following: http://sap.fusesource.org/idoc/MY_REPO/FLCUSTOMER_CREATEFROMDATA01/// Built-in type converter The Camel SAP component has a built-in type converter, which is capable of converting a Document object or a DocumentList object to and from a String type. For example, to serialize a Document object to an XML string, you can simply add the following line to a route in XML DSL: <convertBodyTo type="java.lang.String"/> You can also use this approach to convert a serialized XML message back into a Document object. For example, given that the current message body is a serialized XML string, you can convert it back into a Document object by adding the following line to a route in XML DSL: <convertBodyTo type="org.fusesource.camel.component.sap.model.idoc.Document"/> Sample IDoc message body in XML format When you convert an IDoc message to a String , it is serialized into an XML document, where the root element is either idoc:Document (for a single document) or idoc:DocumentList (for a list of documents). Example 290.2, "IDoc Message Body in XML" shows a single IDoc document that has been serialized to an idoc:Document element. Example 290.2.
IDoc Message Body in XML <?xml version="1.0" encoding="ASCII"?> <idoc:Document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:FLCUSTOMER_CREATEFROMDATA01---="http://sap.fusesource.org/idoc/XXX/FLCUSTOMER_CREATEFROMDATA01///" xmlns:idoc="http://sap.fusesource.org/idoc" creationDate="2015-01-28T12:39:13.980-0500" creationTime="2015-01-28T12:39:13.980-0500" iDocType="FLCUSTOMER_CREATEFROMDATA01" iDocTypeExtension="" messageType="FLCUSTOMER_CREATEFROMDATA" recipientPartnerNumber="QUICKCLNT" recipientPartnerType="LS" senderPartnerNumber="QUICKSTART" senderPartnerType="LS"> <rootSegment xsi:type="FLCUSTOMER_CREATEFROMDATA01---:ROOT" document="/"> <segmentChildren parent="//@rootSegment"> <E1SCU_CRE parent="//@rootSegment" document="/"> <segmentChildren parent="//@rootSegment/@segmentChildren/@E1SCU_CRE.0"> <E1BPSCUNEW parent="//@rootSegment/@segmentChildren/@E1SCU_CRE.0" document="/" CUSTNAME="Fred Flintstone" FORM="Mr." STREET="123 Rubble Lane" POSTCODE="01234" CITY="Bedrock" COUNTR="US" PHONE="800-555-1212" EMAIL="[email protected]" CUSTTYPE="P" DISCOUNT="005" LANGU="E"/> </segmentChildren> </E1SCU_CRE> </segmentChildren> </rootSegment> </idoc:Document> 290.14. SAP Examples 290.14.1. Example 1: Reading Data from SAP Overview This example demonstrates a route that reads FlightCustomer business object data from SAP. The route invokes the FlightCustomer BAPI method, BAPI_FLCUST_GETLIST , using a SAP synchronous RFC destination endpoint to retrieve the data. Java DSL for route The Java DSL for the example route is as follows: from("direct:getFlightCustomerInfo") .to("bean:createFlightCustomerGetListRequest") .to("sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST") .to("bean:returnFlightCustomerInfo"); XML DSL for route And the Spring DSL for the same route is as follows: <route> <from uri="direct:getFlightCustomerInfo"/> <to uri="bean:createFlightCustomerGetListRequest"/> <to uri="sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST"/> <to uri="bean:returnFlightCustomerInfo"/> </route> createFlightCustomerGetListRequest bean The createFlightCustomerGetListRequest bean is responsible for building the SAP request object that is used in the RFC call of the subsequent SAP endpoint. The following code snippet demonstrates the sequence of operations to build the request object:

public void create(Exchange exchange) throws Exception {

    // Get SAP Endpoint to be called from context.
    SapSynchronousRfcDestinationEndpoint endpoint =
        exchange.getContext().getEndpoint("sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST",
            SapSynchronousRfcDestinationEndpoint.class);

    // Retrieve bean from message containing Flight Customer name to
    // look up.
    BookFlightRequest bookFlightRequest = exchange.getIn().getBody(BookFlightRequest.class);

    // Create SAP Request object from target endpoint.
    Structure request = endpoint.getRequest();

    // Add Customer Name to request if set
    if (bookFlightRequest.getCustomerName() != null && bookFlightRequest.getCustomerName().length() > 0) {
        request.put("CUSTOMER_NAME", bookFlightRequest.getCustomerName());
    } else {
        throw new Exception("No Customer Name");
    }

    // Put request object into body of exchange message.
    exchange.getIn().setBody(request);
}

returnFlightCustomerInfo bean The returnFlightCustomerInfo bean is responsible for extracting data from the SAP response object that it receives from the SAP endpoint.
The following code snippet demonstrates the sequence of operations to extract the data from the response object:

public void createFlightCustomerInfo(Exchange exchange) throws Exception {

    // Retrieve SAP response object from body of exchange message.
    Structure flightCustomerGetListResponse = exchange.getIn().getBody(Structure.class);
    if (flightCustomerGetListResponse == null) {
        throw new Exception("No Flight Customer Get List Response");
    }

    // Check BAPI return parameter for errors
    @SuppressWarnings("unchecked")
    Table<Structure> bapiReturn = flightCustomerGetListResponse.get("RETURN", Table.class);
    Structure bapiReturnEntry = bapiReturn.get(0);
    if (!"S".equals(bapiReturnEntry.get("TYPE", String.class))) {
        String message = bapiReturnEntry.get("MESSAGE", String.class);
        throw new Exception("BAPI call failed: " + message);
    }

    // Get customer list table from response object.
    @SuppressWarnings("unchecked")
    Table<? extends Structure> customerList = flightCustomerGetListResponse.get("CUSTOMER_LIST", Table.class);
    if (customerList == null || customerList.size() == 0) {
        throw new Exception("No Customer Info.");
    }

    // Get Flight Customer data from first row of table.
    Structure customer = customerList.get(0);

    // Create bean to hold Flight Customer data.
    FlightCustomerInfo flightCustomerInfo = new FlightCustomerInfo();

    // Get customer id from Flight Customer data and add to bean.
    String customerId = customer.get("CUSTOMERID", String.class);
    if (customerId != null) {
        flightCustomerInfo.setCustomerNumber(customerId);
    }

    ...

    // Put bean into header of exchange message.
    exchange.getIn().setHeader("flightCustomerInfo", flightCustomerInfo);
}

290.14.2. Example 2: Writing Data to SAP Overview This example demonstrates a route that creates a FlightTrip business object instance in SAP. The route invokes the FlightTrip BAPI method, BAPI_FLTRIP_CREATE , using a destination endpoint to create the object. Java DSL for route The Java DSL for the example route is as follows: from("direct:createFlightTrip") .to("bean:createFlightTripRequest") .to("sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true") .to("bean:returnFlightTripResponse"); XML DSL for route And the Spring DSL for the same route is as follows: <route> <from uri="direct:createFlightTrip"/> <to uri="bean:createFlightTripRequest"/> <to uri="sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true"/> <to uri="bean:returnFlightTripResponse"/> </route> Transaction support Note The URL for the SAP endpoint has the transacted option set to true . As discussed in Section 290.11, "Transaction Support" , when this option is enabled the endpoint ensures that a SAP transaction session has been initiated before invoking the RFC call. Because this endpoint's RFC creates new data in SAP, this option is necessary to make the route's changes permanent in SAP. Populating request parameters The createFlightTripRequest and returnFlightTripResponse beans are responsible for populating request parameters into the SAP request and extracting response parameters from the SAP response respectively, following the same sequence of operations as demonstrated in the previous example. 290.14.3. Example 3: Handling Requests from SAP Overview This example demonstrates a route that handles a request from SAP to the BOOK_FLIGHT RFC, which is implemented by the route. In addition, it demonstrates the component's XML serialization support, using JAXB to unmarshal and marshal SAP request objects and response objects to custom beans.
This route creates a FlightTrip business object on behalf of a travel agent, FlightCustomer . The route first unmarshals the SAP request object received by the SAP server endpoint into a custom JAXB bean. This custom bean is then multicast in the exchange to three sub-routes, which gather the travel agent, flight connection, and passenger information required to create the flight trip. The final sub-route creates the flight trip object in SAP, as demonstrated in the previous example. The final sub-route also creates and returns a custom JAXB bean which is marshaled into a SAP response object and returned by the server endpoint. Java DSL for route The Java DSL for the example route is as follows: DataFormat jaxb = new JaxbDataFormat("org.fusesource.sap.example.jaxb"); from("sap-srfc-server:nplserver:BOOK_FLIGHT") .unmarshal(jaxb) .multicast() .to("direct:getFlightConnectionInfo", "direct:getFlightCustomerInfo", "direct:getPassengerInfo") .end() .to("direct:createFlightTrip") .marshal(jaxb); XML DSL for route And the XML DSL for the same route is as follows: <route> <from uri="sap-srfc-server:nplserver:BOOK_FLIGHT"/> <unmarshal> <jaxb contextPath="org.fusesource.sap.example.jaxb"/> </unmarshal> <multicast> <to uri="direct:getFlightConnectionInfo"/> <to uri="direct:getFlightCustomerInfo"/> <to uri="direct:getPassengerInfo"/> </multicast> <to uri="direct:createFlightTrip"/> <marshal> <jaxb contextPath="org.fusesource.sap.example.jaxb"/> </marshal> </route> BookFlightRequest bean The following listing illustrates a JAXB bean which unmarshals from the serialized form of a SAP BOOK_FLIGHT request object: @XmlRootElement(name="Request", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightRequest { @XmlAttribute(name="CUSTNAME") private String customerName; @XmlAttribute(name="FLIGHTDATE") @XmlJavaTypeAdapter(DateAdapter.class) private Date flightDate; @XmlAttribute(name="TRAVELAGENCYNUMBER") private String travelAgencyNumber; @XmlAttribute(name="DESTINATION_FROM") private String startAirportCode; @XmlAttribute(name="DESTINATION_TO") private String endAirportCode; @XmlAttribute(name="PASSFORM") private String passengerFormOfAddress; @XmlAttribute(name="PASSNAME") private String passengerName; @XmlAttribute(name="PASSBIRTH") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlAttribute(name="CLASS") private String flightClass; ... } BookFlightResponse bean The following listing illustrates a JAXB bean which marshals to the serialized form of a SAP BOOK_FLIGHT response object: @XmlRootElement(name="Response", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightResponse { @XmlAttribute(name="TRIPNUMBER") private String tripNumber; @XmlAttribute(name="TICKET_PRICE") private BigDecimal ticketPrice; @XmlAttribute(name="TICKET_TAX") private BigDecimal ticketTax; @XmlAttribute(name="CURRENCY") private String currency; @XmlAttribute(name="PASSFORM") private String passengerFormOfAddress; @XmlAttribute(name="PASSNAME") private String passengerName; @XmlAttribute(name="PASSBIRTH") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlElement(name="FLTINFO") private FlightInfo flightInfo; @XmlElement(name="CONNINFO") private ConnectionInfoTable connectionInfo; ... } Note The complex parameter fields of the response object are serialized as child elements of the response.
FlightInfo bean The following listing illustrates a JAXB bean which marshals to the serialized form of the complex structure parameter FLTINFO : @XmlRootElement(name="FLTINFO", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class FlightInfo { @XmlAttribute(name="FLIGHTTIME") private String flightTime; @XmlAttribute(name="CITYFROM") private String cityFrom; @XmlAttribute(name="DEPDATE") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureDate; @XmlAttribute(name="DEPTIME") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureTime; @XmlAttribute(name="CITYTO") private String cityTo; @XmlAttribute(name="ARRDATE") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalDate; @XmlAttribute(name="ARRTIME") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalTime; ... } ConnectionInfoTable bean The following listing illustrates a JAXB bean which marshals to the serialized form of the complex table parameter, CONNINFO . Note The name of the root element type of the JAXB bean corresponds to the name of the row structure type suffixed with _TABLE and the bean contains a list of row elements. @XmlRootElement(name="CONNINFO_TABLE", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfoTable { @XmlElement(name="row") List<ConnectionInfo> rows; ... } ConnectionInfo bean The following listing illustrates a JAXB bean, which marshals to the serialized form of the above table's row elements: @XmlRootElement(name="CONNINFO", namespace="http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfo { @XmlAttribute(name="CONNID") String connectionId; @XmlAttribute(name="AIRLINE") String airline; @XmlAttribute(name="PLANETYPE") String planeType; @XmlAttribute(name="CITYFROM") String cityFrom; @XmlAttribute(name="DEPDATE") @XmlJavaTypeAdapter(DateAdapter.class) Date departureDate; @XmlAttribute(name="DEPTIME") @XmlJavaTypeAdapter(DateAdapter.class) Date departureTime; @XmlAttribute(name="CITYTO") String cityTo; @XmlAttribute(name="ARRDATE") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalDate; @XmlAttribute(name="ARRTIME") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalTime; ... }
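To see what these beans produce on the wire, the following is a minimal standalone sketch, not part of the original example code: it marshals a freshly constructed BookFlightResponse with the plain JAXB API, assuming the bean class shown above is on the classpath. Because attributes are only written for non-null fields, an empty bean yields just the root element in the RFC namespace.

// Java: marshalling the BookFlightResponse bean to its serialized RFC XML form.
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class MarshalSketch {
    public static void main(String[] args) throws Exception {
        // A freshly constructed bean; in the route, the final sub-route populates it.
        BookFlightResponse response = new BookFlightResponse();

        JAXBContext context = JAXBContext.newInstance(BookFlightResponse.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

        StringWriter writer = new StringWriter();
        marshaller.marshal(response, writer);

        // Prints a BOOK_FLIGHT:Response root element in the
        // http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT namespace.
        System.out.println(writer);
    }
}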
[ "<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap</artifactId> <version>x.x.x</version> <dependency>", ":experimental: // Standard document attributes to be used in the documentation // // The following are shared by all documents :toc: :toclevels: 4 :numbered:", "org.osgi.framework.system.packages.extra ... , com.sap.conn.idoc, com.sap.conn.idoc.jco, com.sap.conn.jco, com.sap.conn.jco.ext, com.sap.conn.jco.monitor, com.sap.conn.jco.rt, com.sap.conn.jco.server", "JBossFuse:karaf@root> features:install camel-sap", "cp sapjco3.jar sapidoc3.jar USDJBOSS_HOME/modules/system/layers/fuse/com/sap/conn/jco/main/ mkdir -p USDJBOSS_HOME/modules/system/layers/fuse/com/sap/conn/jco/main/lib/linux-x86_64 cp libsapjco3.so USDJBOSS_HOME/modules/system/layers/fuse/com/sap/conn/jco/main/lib/linux-x86_64/", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.wildfly.camel.extras\"> <dependencies> <module name=\"org.fusesource.camel.component.sap\" export=\"true\" services=\"export\" /> </dependencies> </module>", "<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency>", "src └── lib └── amd64.com.sap.conn β”œβ”€β”€ idoc β”‚ └── sapidoc3.jar └── jco β”œβ”€β”€ sapjco3.jar └── sapjco3.so", "<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestEntries> <Class-Path>lib/USD{os.arch}/sapjco3.jar lib/USD{os.arch}/sapidoc3.jar</Class-Path> </manifestEntries> </archive> </configuration> </plugin>", "<plugin> <artifactId>maven-resources-plugin</artifactId> <executions> <execution> <id>copy-resources01</id> <phase>process-classes</phase> <goals> <goal>copy-resources</goal> </goals> <configuration> <outputDirectory>USD{basedir}/target/lib</outputDirectory> <encoding>UTF-8</encoding> <resources> <resource> <directory>USD{basedir}/lib</directory> <includes> <include>**/**</include> </includes> </resource> </resources> </configuration> </execution> </executions> </plugin>", "new-build --binary=true --image-stream=\"<current_Fuse_Java_OpenShift_Imagestream_version>\" --name=<application_name> -e \"ARTIFACT_COPY_ARGS=-a .\" -e \"MAVEN_ARGS_APPEND=<additional_args> -e \"ARTIFACT_DIR=<relative_path_of_target_directory>\"", "new-build --binary=true --image-stream=\"fuse7-java-openshift:1.4\" --name=sapik6 -e \"ARTIFACT_COPY_ARGS=-a .\" -e \"MAVEN_ARGS_APPEND=-pl spring-boot/sap-srfc-destination-spring-boot\" -e \"ARTIFACT_DIR=spring-boot/sap-srfc-destination-spring-boot/target\"", "start-build sapik6 --from-dir=.", "new-app --image-stream=<name>:<version>", "new-app --image-stream=sapik6:latest", "src └── lib └── amd64.com.sap.conn β”œβ”€β”€ idoc β”‚ └── sapidoc3.jar └── jco β”œβ”€β”€ sapjco3.jar └── sapjco3.so", "<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <exclusions> <exclusion> <groupId>com.sap.conn.idoc</groupId> <artifactId>sapidoc3</artifactId> </exclusion> <exclusion> <groupId>com.sap.conn.jco</groupId> <artifactId>sapjco3</artifactId> </exclusion> </exclusions> </dependency>", "<resources> <resource> <directory>src/lib/USD{os.arch}/com/sap/conn/idoc</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> <resource> 
<directory>src/lib/USD{os.arch}/com/sap/conn/jco</directory> <targetPath>BOOT-INF/lib</targetPath> <includes> <include>*.jar</include> </includes> </resource> </resources>", "<plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.4.0</version> <configuration> <images> <image> <name>USD{project.artifactId}:USD{project.version}</name> <build> <from>USD{java.docker.image}</from> <assembly> <targetDir>/deployments</targetDir> <layers> <layer> <id>static-files</id> <fileSets> <fileSet> <directory>src/lib/USD{os.arch}/com/sap/conn/jco</directory> <outputDirectory>static</outputDirectory> <includes> <include>*.so</include> </includes> </fileSet> </fileSets> </layer> </layers> </assembly> </build> </image> </images> </configuration> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> <goal>apply</goal> </goals> </execution> </executions> </plugin>", "cd <sap_application_path>", "new-project streams", "import-image streams/fuse7-java-openshift:1.11 --from=registry.redhat.io/fuse7/fuse-java-openshift-rhel8:1.11-32 --confirm -n streams (JDK8)", "new-project <your_project>", "mvn clean oc:deploy -Djkube.docker.imagePullPolicy=Always -Popenshift -Djkube.generator.from=streams/fuse7-java-openshift:1.11 -Djkube.resourceDir=./src/main/jkube -Djkube.openshiftManifest=target/classes/META-INF/jkube/openshift.yml -Djkube.generator.fromMode=istag", "sap-srfc-destination: destinationName : rfcName sap-trfc-destination: destinationName : rfcName sap-qrfc-destination: destinationName : queueName : rfcName sap-srfc-server: serverName : rfcName [? options ] sap-trfc-server: serverName : rfcName [? options ]", "sap-idoc-destination: destinationName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-idoclist-destination: destinationName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-qidoc-destination: destinationName : queueName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-qidoclist-destination: destinationName : queueName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]] sap-idoclist-server: serverName : idocType [: idocTypeExtension [: systemRelease [: applicationRelease ]]][? options ]", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... 
> <!-- Configures the Inbound and Outbound SAP Connections --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"destinationDataStore\"> <map> <entry key=\"quickstartDest\" value-ref=\"quickstartDestinationData\" /> </map> </property> <property name=\"serverDataStore\"> <map> <entry key=\"quickstartServer\" value-ref=\"quickstartServerData\" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id=\"quickstartDestinationData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl\"> <property name=\"ashost\" value=\"example.com\" /> <property name=\"sysnr\" value=\"00\" /> <property name=\"client\" value=\"000\" /> <property name=\"user\" value=\"username\" /> <property name=\"passwd\" value=\"passowrd\" /> <property name=\"lang\" value=\"en\" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment ** --> <bean id=\"quickstartServerData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl\"> <property name=\"gwhost\" value=\"example.com\" /> <property name=\"gwserv\" value=\"3300\" /> <!-- Do not change the following property values --> <property name=\"progid\" value=\"QUICKSTART\" /> <property name=\"repositoryDestination\" value=\"quickstartDest\" /> <property name=\"connectionCount\" value=\"2\" /> </bean> </blueprint>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... > <!-- Create interceptor to support tRFC processing --> <bean id=\"currentProcessorDefinitionInterceptor\" class=\"org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy\" /> <!-- Configures the Inbound and Outbound SAP Connections --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"destinationDataStore\"> <map> <entry key=\"quickstartDest\" value-ref=\"quickstartDestinationData\" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id=\"quickstartDestinationData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl\"> <property name=\"ashost\" value=\"example.com\" /> <property name=\"sysnr\" value=\"00\" /> <property name=\"client\" value=\"000\" /> <property name=\"user\" value=\"username\" /> <property name=\"passwd\" value=\"password\" /> <property name=\"lang\" value=\"en\" /> </bean> </blueprint>", "sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... 
> <!-- Configures the Inbound and Outbound SAP Connections --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"destinationDataStore\"> <map> <entry key=\"quickstartDest\" value-ref=\"quickstartDestinationData\" /> </map> </property> <property name=\"serverDataStore\"> <map> <entry key=\"quickstartServer\" value-ref=\"quickstartServerData\" /> </map> </property> </bean> <!-- Configures an Outbound SAP Connection --> <!-- *** Please enter the connection property values for your environment *** --> <bean id=\"quickstartDestinationData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl\"> <property name=\"ashost\" value=\"example.com\" /> <property name=\"sysnr\" value=\"00\" /> <property name=\"client\" value=\"000\" /> <property name=\"user\" value=\"username\" /> <property name=\"passwd\" value=\"passowrd\" /> <property name=\"lang\" value=\"en\" /> </bean> <!-- Configures an Inbound SAP Connection --> <!-- *** Please enter the connection property values for your environment ** --> <bean id=\"quickstartServerData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl\"> <property name=\"gwhost\" value=\"example.com\" /> <property name=\"gwserv\" value=\"3300\" /> <!-- Do not change the following property values --> <property name=\"progid\" value=\"QUICKSTART\" /> <property name=\"repositoryDestination\" value=\"quickstartDest\" /> <property name=\"connectionCount\" value=\"2\" /> </bean> </blueprint>", "sap-srfc-server:quickstartServer:BAPI_FLCUST_GETLIST", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ... > <!-- Configures the sap-srfc-server component --> <bean id=\"sap-configuration\" class=\"org.fusesource.camel.component.sap.SapConnectionConfiguration\"> <property name=\"repositoryDataStore\"> <map> <entry key=\"nplServer\" value-ref=\"nplRepositoryData\" /> </map> </property> </bean> <!-- Configures a metadata Repository --> <bean id=\"nplRepositoryData\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl\"> <property name=\"functionTemplates\"> <map> <entry key=\"BOOK_FLIGHT\" value-ref=\"bookFlightFunctionTemplate\" /> </map> </property> </bean> </blueprint>", "<bean id=\"bookFlightFunctionTemplate\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl\"> <property name=\"importParameterList\"> <list> </list> </property> <property name=\"changingParameterList\"> <list> </list> </property> <property name=\"exportParameterList\"> <list> </list> </property> <property name=\"tableParameterList\"> <list> </list> </property> <property name=\"exceptionList\"> <list> </list> </property> </bean>", "<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl\"> <property name=\"name\" value=\"TICKET_PRICE\" /> <property name=\"type\" value=\"BCD\" /> <property name=\"byteLength\" value=\"12\" /> <property name=\"unicodeByteLength\" value=\"24\" /> <property name=\"decimals\" value=\"2\" /> <property name=\"optional\" value=\"true\" /> </bean>", "<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl\"> <property name=\"name\" value=\"CONNINFO\" /> <property name=\"type\" value=\"TABLE\" /> <property name=\"recordMetaData\" ref=\"connectionInfo\" /> </bean>", "<bean id=\"connectionInfo\" class=\"org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl\"> <property name=\"name\" value=\"CONNECTION_INFO\" /> <property name=\"recordFieldMetaData\"> 
<list> </list> </property> </bean>", "<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl\"> <property name=\"name\" value=\"ARRDATE\" /> <property name=\"type\" value=\"DATE\" /> <property name=\"byteLength\" value=\"8\" /> <property name=\"unicodeByteLength\" value=\"16\" /> <property name=\"byteOffset\" value=\"85\" /> <property name=\"unicodeByteOffset\" value=\"170\" /> </bean>", "<bean class=\"org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl\"> <property name=\"name\" value=\"FLTINFO\" /> <property name=\"type\" value=\"STRUCTURE\" /> <property name=\"byteOffset\" value=\"0\" /> <property name=\"unicodeByteOffset\" value=\"0\" /> <property name=\"recordMetaData\" ref=\"flightInfo\" /> </bean>", "public class SAPEndpoint { public Structure getRequest() throws Exception; public Structure getResponse() throws Exception; }", "public interface Structure extends org.eclipse.emf.ecore.EObject, java.util.Map<String, Object> { <T> T get(Object key, Class<T> type); }", "public interface Table<S extends Structure> extends org.eclipse.emf.ecore.EObject, java.util.List<S> { /** * Creates and adds table row at end of row list */ S add(); /** * Creates and adds table row at index in row list */ S add(int index); }", "org.fusesource.camel.component.sap.model.idoc.Document", "org.fusesource.camel.component.sap.model.idoc.DocumentList", "org.fusesource.camel.component.sap.model.idoc.Document org.fusesource.camel.component.sap.model.idoc.Segment", "// Java package org.fusesource.camel.component.sap.model.idoc; public interface Document extends EObject { // Access the field values from the IDoc control record String getArchiveKey(); void setArchiveKey(String value); String getClient(); void setClient(String value); // Access the IDoc document contents Segment getRootSegment(); }", "// Java package org.fusesource.camel.component.sap.model.idoc; public interface Segment extends EObject, java.util.Map<String, Object> { // Returns the value of the '<em><b>Parent</b></em>' reference. Segment getParent(); // Return a immutable list of all child segments <S extends Segment> EList<S> getChildren(); // Returns a list of child segments of the specified segment type. 
<S extends Segment> SegmentList<S> getChildren(String segmentType); EList<String> getTypes(); Document getDocument(); String getDescription(); String getType(); String getDefinition(); int getHierarchyLevel(); String getIdocType(); String getIdocTypeExtension(); String getSystemRelease(); String getApplicationRelease(); int getNumFields(); long getMaxOccurrence(); long getMinOccurrence(); boolean isMandatory(); boolean isQualified(); int getRecordLength(); <T> T get(Object key, Class<T> type); }", "// Java package org.fusesource.camel.component.sap.model.idoc; public interface SegmentList<S extends Segment> extends EObject, EList<S> { S add(); S add(int index); }", "Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren(\"E1SCU_CRE\").add();", "// Java import org.fusesource.camel.component.sap.model.idoc.Document; import org.fusesource.camel.component.sap.model.idoc.Segment; import org.fusesource.camel.component.sap.util.IDocUtil; import org.fusesource.camel.component.sap.model.idoc.Document; import org.fusesource.camel.component.sap.model.idoc.DocumentList; import org.fusesource.camel.component.sap.model.idoc.IdocFactory; import org.fusesource.camel.component.sap.model.idoc.IdocPackage; import org.fusesource.camel.component.sap.model.idoc.Segment; import org.fusesource.camel.component.sap.model.idoc.SegmentChildren; // // Create a new IDoc instance using the modelling classes // // Get the SAP Endpoint bean from the Camel context. // In this example, it's a 'sap-idoc-destination' endpoint. SapTransactionalIDocDestinationEndpoint endpoint = exchange.getContext().getEndpoint( \"bean: SapEndpointBeanID \", SapTransactionalIDocDestinationEndpoint.class ); // The endpoint automatically populates some required control record attributes Document document = endpoint.createDocument() // Initialize additional control record attributes document.setMessageType(\"FLCUSTOMER_CREATEFROMDATA\"); document.setRecipientPartnerNumber(\"QUICKCLNT\"); document.setRecipientPartnerType(\"LS\"); document.setSenderPartnerNumber(\"QUICKSTART\"); document.setSenderPartnerType(\"LS\"); Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren(\"E1SCU_CRE\").add(); Segment E1BPSCUNEW_Segment = E1SCU_CRE_Segment.getChildren(\"E1BPSCUNEW\").add(); E1BPSCUNEW_Segment.put(\"CUSTNAME\", \"Fred Flintstone\"); E1BPSCUNEW_Segment.put(\"FORM\", \"Mr.\"); E1BPSCUNEW_Segment.put(\"STREET\", \"123 Rubble Lane\"); E1BPSCUNEW_Segment.put(\"POSTCODE\", \"01234\"); E1BPSCUNEW_Segment.put(\"CITY\", \"Bedrock\"); E1BPSCUNEW_Segment.put(\"COUNTR\", \"US\"); E1BPSCUNEW_Segment.put(\"PHONE\", \"800-555-1212\"); E1BPSCUNEW_Segment.put(\"EMAIL\", \" [email protected] \"); E1BPSCUNEW_Segment.put(\"CUSTTYPE\", \"P\"); E1BPSCUNEW_Segment.put(\"DISCOUNT\", \"005\"); E1BPSCUNEW_Segment.put(\"LANGU\", \"E\");", "// Java document.setIDocType(\"FLCUSTOMER_CREATEFROMDATA01\"); document.setIDocTypeExtension(\"\"); document.setMessageType(\"FLCUSTOMER_CREATEFROMDATA\");", "<?xml version=\"1.0\" encoding=\"ASCII\"?> <idoc:Document iDocType=\"FLCUSTOMER_CREATEFROMDATA01\" iDocTypeExtension=\"\" messageType=\"FLCUSTOMER_CREATEFROMDATA\" ... 
> </idoc:Document>", "http://sap.fusesource.org/rfc/<Repository Name>/<RFC Name>", "<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:Request>", "<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Response xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:Response>", "<BOOK_FLIGHT:FLTINFO xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:FLTINFO>", "<xs:schema targetNamespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"> <xs:complexType name=\"FLTINFO_STRUCTURE\"> </xs:complexType> </xs:schema>", "<BOOK_FLIGHT:CONNINFO xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> <row ... > ... </row> <row ... > ... </row> </BOOK_FLIGHT:CONNINFO>", "<xs:schema targetNamespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"> <xs:complextType name=\"CONNECTION_INFO_STRUCTURE_TABLE\"> <xs:sequence> <xs:element name=\"row\" minOccures=\"0\" maxOccurs=\"unbounded\" type=\"CONNECTION_INFO_STRUCTURE\"/> <xs:sequence> </xs:sequence> </xs:complexType> <xs:complextType name=\"CONNECTION_INFO_STRUCTURE\"> </xs:complexType> </xs:schema>", "<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\" CUSTNAME=\"James Legrand\" PASSFORM=\"Mr\" PASSNAME=\"Travelin Joe\" PASSBIRTH=\"1990-03-17T00:00:00.000-0500\" FLIGHTDATE=\"2014-03-19T00:00:00.000-0400\" TRAVELAGENCYNUMBER=\"00000110\" DESTINATION_FROM=\"SFO\" DESTINATION_TO=\"FRA\"/>", "yyyy-MM-dd'T'HH:mm:ss.SSSZ", "DEPDATE=\"2014-03-19T00:00:00.000-0400\"", "DEPTIME=\"1970-01-01T16:00:00.000-0500\"", "http://sap.fusesource.org/idoc/ repositoryName / idocType / idocTypeExtension / systemRelease / applicationRelease", "http://sap.fusesource.org/idoc/MY_REPO/FLCUSTOMER_CREATEFROMDATA01///", "<convertBodyTo type=\"java.lang.String\"/>", "<convertBodyTo type=\"org.fusesource.camel.component.sap.model.idoc.Document\"/>", "<?xml version=\"1.0\" encoding=\"ASCII\"?> <idoc:Document xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:FLCUSTOMER_CREATEFROMDATA01---=\"http://sap.fusesource.org/idoc/XXX/FLCUSTOMER_CREATEFROMDATA01///\" xmlns:idoc=\"http://sap.fusesource.org/idoc\" creationDate=\"2015-01-28T12:39:13.980-0500\" creationTime=\"2015-01-28T12:39:13.980-0500\" iDocType=\"FLCUSTOMER_CREATEFROMDATA01\" iDocTypeExtension=\"\" messageType=\"FLCUSTOMER_CREATEFROMDATA\" recipientPartnerNumber=\"QUICKCLNT\" recipientPartnerType=\"LS\" senderPartnerNumber=\"QUICKSTART\" senderPartnerType=\"LS\"> <rootSegment xsi:type=\"FLCUSTOMER_CREATEFROMDATA01---:ROOT\" document=\"/\"> <segmentChildren parent=\"//@rootSegment\"> <E1SCU_CRE parent=\"//@rootSegment\" document=\"/\"> <segmentChildren parent=\"//@rootSegment/@segmentChildren/@E1SCU_CRE.0\"> <E1BPSCUNEW parent=\"//@rootSegment/@segmentChildren/@E1SCU_CRE.0\" document=\"/\" CUSTNAME=\"Fred Flintstone\" FORM=\"Mr.\" STREET=\"123 Rubble Lane\" POSTCODE=\"01234\" CITY=\"Bedrock\" COUNTR=\"US\" PHONE=\"800-555-1212\" EMAIL=\"[email protected]\" CUSTTYPE=\"P\" DISCOUNT=\"005\" LANGU=\"E\"/> </segmentChildren> </E1SCU_CRE> </segmentChildren> </rootSegment> </idoc:Document>", "from(\"direct:getFlightCustomerInfo\") .to(\"bean:createFlightCustomerGetListRequest\") .to(\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\") 
.to(\"bean:returnFlightCustomerInfo\");", "<route> <from uri=\"direct:getFlightCustomerInfo\"/> <to uri=\"bean:createFlightCustomerGetListRequest\"/> <to uri=\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\"/> <to uri=\"bean:returnFlightCustomerInfo\"/> </route>", "public void create(Exchange exchange) throws Exception { // Get SAP Endpoint to be called from context. SapSynchronousRfcDestinationEndpoint endpoint = exchange.getContext().getEndpoint(\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\", SapSynchronousRfcDestinationEndpoint.class); // Retrieve bean from message containing Flight Customer name to // look up. BookFlightRequest bookFlightRequest = exchange.getIn().getBody(BookFlightRequest.class); // Create SAP Request object from target endpoint. Structure request = endpoint.getRequest(); // Add Customer Name to request if set if (bookFlightRequest.getCustomerName() != null && bookFlightRequest.getCustomerName().length() > 0) { request.put(\"CUSTOMER_NAME\", bookFlightRequest.getCustomerName()); } } else { throw new Exception(\"No Customer Name\"); } // Put request object into body of exchange message. exchange.getIn().setBody(request); }", "public void createFlightCustomerInfo(Exchange exchange) throws Exception { // Retrieve SAP response object from body of exchange message. Structure flightCustomerGetListResponse = exchange.getIn().getBody(Structure.class); if (flightCustomerGetListResponse == null) { throw new Exception(\"No Flight Customer Get List Response\"); } // Check BAPI return parameter for errors @SuppressWarnings(\"unchecked\") Table<Structure> bapiReturn = flightCustomerGetListResponse.get(\"RETURN\", Table.class); Structure bapiReturnEntry = bapiReturn.get(0); if (bapiReturnEntry.get(\"TYPE\", String.class) != \"S\") { String message = bapiReturnEntry.get(\"MESSAGE\", String.class); throw new Exception(\"BAPI call failed: \" + message); } // Get customer list table from response object. @SuppressWarnings(\"unchecked\") Table<? extends Structure> customerList = flightCustomerGetListResponse.get(\"CUSTOMER_LIST\", Table.class); if (customerList == null || customerList.size() == 0) { throw new Exception(\"No Customer Info.\"); } // Get Flight Customer data from first row of table. Structure customer = customerList.get(0); // Create bean to hold Flight Customer data. FlightCustomerInfo flightCustomerInfo = new FlightCustomerInfo(); // Get customer id from Flight Customer data and add to bean. String customerId = customer.get(\"CUSTOMERID\", String.class); if (customerId != null) { flightCustomerInfo.setCustomerNumber(customerId); } // Put bean into body of exchange message. 
exchange.getIn().setHeader(\"flightCustomerInfo\", flightCustomerInfo); }", "from(\"direct:createFlightTrip\") .to(\"bean:createFlightTripRequest\") .to(\"sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true\") .to(\"bean:returnFlightTripResponse\");", "<route> <from uri=\"direct:createFlightTrip\"/> <to uri=\"bean:createFlightTripRequest\"/> <to uri=\"sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true\"/> <to uri=\"bean:returnFlightTripResponse\"/> </route>", "DataFormat jaxb = new JaxbDataFormat(\"org.fusesource.sap.example.jaxb\"); from(\"sap-srfc-server:nplserver:BOOK_FLIGHT\") .unmarshal(jaxb) .multicast() .to(\"direct:getFlightConnectionInfo\", \"direct:getFlightCustomerInfo\", \"direct:getPassengerInfo\") .end() .to(\"direct:createFlightTrip\") .marshal(jaxb);", "<route> <from uri=\"sap-srfc-server:nplserver:BOOK_FLIGHT\"/> <unmarshal> <jaxb contextPath=\"org.fusesource.sap.example.jaxb\"/> </unmarshal> <multicast> <to uri=\"direct:getFlightConnectionInfo\"/> <to uri=\"direct:getFlightCustomerInfo\"/> <to uri=\"direct:getPassengerInfo\"/> </multicast> <to uri=\"direct:createFlightTrip\"/> <marshal> <jaxb contextPath=\"org.fusesource.sap.example.jaxb\"/> </marshal> </route>", "@XmlRootElement(name=\"Request\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightRequest { @XmlAttribute(name=\"CUSTNAME\") private String customerName; @XmlAttribute(name=\"FLIGHTDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date flightDate; @XmlAttribute(name=\"TRAVELAGENCYNUMBER\") private String travelAgencyNumber; @XmlAttribute(name=\"DESTINATION_FROM\") private String startAirportCode; @XmlAttribute(name=\"DESTINATION_TO\") private String endAirportCode; @XmlAttribute(name=\"PASSFORM\") private String passengerFormOfAddress; @XmlAttribute(name=\"PASSNAME\") private String passengerName; @XmlAttribute(name=\"PASSBIRTH\") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlAttribute(name=\"CLASS\") private String flightClass; }", "@XmlRootElement(name=\"Response\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightResponse { @XmlAttribute(name=\"TRIPNUMBER\") private String tripNumber; @XmlAttribute(name=\"TICKET_PRICE\") private BigDecimal ticketPrice; @XmlAttribute(name=\"TICKET_TAX\") private BigDecimal ticketTax; @XmlAttribute(name=\"CURRENCY\") private String currency; @XmlAttribute(name=\"PASSFORM\") private String passengerFormOfAddress; @XmlAttribute(name=\"PASSNAME\") private String passengerName; @XmlAttribute(name=\"PASSBIRTH\") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlElement(name=\"FLTINFO\") private FlightInfo flightInfo; @XmlElement(name=\"CONNINFO\") private ConnectionInfoTable connectionInfo; }", "@XmlRootElement(name=\"FLTINFO\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class FlightInfo { @XmlAttribute(name=\"FLIGHTTIME\") private String flightTime; @XmlAttribute(name=\"CITYFROM\") private String cityFrom; @XmlAttribute(name=\"DEPDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureDate; @XmlAttribute(name=\"DEPTIME\") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureTime; @XmlAttribute(name=\"CITYTO\") private String cityTo; @XmlAttribute(name=\"ARRDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalDate; 
@XmlAttribute(name=\"ARRTIME\") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalTime; }", "@XmlRootElement(name=\"CONNINFO_TABLE\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfoTable { @XmlElement(name=\"row\") List<ConnectionInfo> rows; }", "@XmlRootElement(name=\"CONNINFO\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfo { @XmlAttribute(name=\"CONNID\") String connectionId; @XmlAttribute(name=\"AIRLINE\") String airline; @XmlAttribute(name=\"PLANETYPE\") String planeType; @XmlAttribute(name=\"CITYFROM\") String cityFrom; @XmlAttribute(name=\"DEPDATE\") @XmlJavaTypeAdapter(DateAdapter.class) Date departureDate; @XmlAttribute(name=\"DEPTIME\") @XmlJavaTypeAdapter(DateAdapter.class) Date departureTime; @XmlAttribute(name=\"CITYTO\") String cityTo; @XmlAttribute(name=\"ARRDATE\") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalDate; @XmlAttribute(name=\"ARRTIME\") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalTime; }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/SAP
Chapter 2. Configuration of a custom trigger for a dynamic recording
Chapter 2. Configuration of a custom trigger for a dynamic recording When you configure your target application to load the Cryostat agent, you can define one or more custom triggers that are then passed as arguments to the agent. For more information about configuring a target application to load the Cryostat agent, see Configuring Java applications . 2.1. Options for defining a custom trigger You can define a custom trigger in any of the following ways: Appending a custom trigger to the Cryostat agent's JAR file path The following example shows how to append a simple custom trigger to the Cryostat agent's JAR file path: The preceding example trigger instructs the agent to start a JFR recording if the ProcessCpuLoad metric has a value greater than 0.2 for a duration of more than 30 seconds: This example also instructs the agent to use the profile event template for the JFR recording. Using a JVM system property flag The following example shows how to specify a simple custom trigger by using a JVM system property flag: This example uses the same custom trigger criteria as the preceding example. Using an environment variable The following example shows how to specify a simple custom trigger by using an environment variable: This example uses the same custom trigger criteria as the preceding examples. 2.2. Common Expression Language You can use Common Expression Language (CEL) to define a custom trigger condition. CEL is a free-form expression syntax that provides great flexibility in defining rules and constraints for evaluating data. For example, you can use CEL to create relational statements for evaluating if any combination of MBean counter types have current values greater than, equal to, or less than specified configurable values. You can also include any combination of AND ( && ) or OR ( || ) logic statements between different MBean counter types that are part of the same trigger condition. For more information about CEL, see the CEL language specification . 2.3. General syntax rules for custom triggers Consider the following syntax guidelines for defining custom triggers: A custom trigger definition must consist of both an expression that defines the overall trigger condition and the name of an event template that is used for the JFR recording. The entire trigger expression must be enclosed in square brackets (for example, [ProcessCpuLoad > 0.2 ; TargetDuration > duration("30s")] ). For readability, you may use white space in a trigger expression as shown in the preceding example, but this is not a requirement. The name of the event template must be defined after the trigger expression and preceded by a tilde ( ~ ) character (for example, ~profile ). A trigger expression can consist of one or more constraints and a target duration. The set of constraints and target duration must be separated by a semicolon ( ; ) character. Each constraint must include: the name of an MBean counter; a relational operator such as > (greater than), == (equal to), < (less than), and so on; and a specified value. The type of relational operator and value that you can specify depends on the associated MBean counter type (for example, ProcessCpuLoad > 0.2 ). Constraints can be grouped together by using logical operators such as && (AND), || (OR), or ! (NOT) logic. For readability and clarity around the order of operations and operator precedence, grouped constraints may be enclosed in round brackets, but this is not a requirement. 
For example: The name of each MBean counter that is specified as part of a custom trigger must follow precise syntax rules in terms of spelling and capitalization. For a full list of the MBean metrics that you can specify, see MBean counter types . Only one target duration can be defined for a custom trigger. The target duration is applied to the entire trigger expression that is enclosed within the square brackets. A target duration can be expressed in terms of seconds, minutes, or hours. For example, 30s means 30 seconds, 5m means five minutes, 2h means two hours, and so on. A target duration is optional. If a target duration is not specified, triggering will occur immediately once the trigger conditions are met. Multiple custom trigger definitions can be specified together, each of which relates to a separate JFR recording. Different custom trigger definitions must be separated by a comma ( , ) character. For example:
[ "JAVA_OPTS=\"-javaagent:/deployments/app/cryostat-agent-shaded.jar=\\\"[ProcessCpuLoad > 0.2 ; TargetDuration > duration('30s')]~profile\\\"\"", "-Dcryostat.agent.smart-trigger.definitions=\"[ProcessCpuLoad > 0.2 ; TargetDuration > duration(\\\"30s\\\")]~profile\"", "- name: CRYOSTAT_AGENT_SMART_TRIGGER_DEFINITIONS value: \"[ProcessCpuLoad > 0.2 ; TargetDuration > duration(\\\"30s\\\")]~profile\"", "[( MetricA > value1 && MetricB < value2 ) || MetricC == ' stringvalue ' ; TargetDuration > duration(\"30s\")]", "[ProcessCpuLoad>0.2]~profile,[ThreadCount>30]~Continuous" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/enabling_dynamic_jfr_recordings_based_on_mbean_custom_triggers/con_configuration-of-custom-trigger-for-dynamic-recording_cryostat
Chapter 331. Stream Component
Chapter 331. Stream Component Available as of Camel version 1.3 The stream: component provides access to the System.in , System.out and System.err streams, as well as allowing you to stream files and URLs. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-stream</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 331.1. URI format stream:in[?options] stream:out[?options] stream:err[?options] stream:header[?options] In addition, the file and url endpoint URIs are supported: stream:file?fileName=/foo/bar.txt stream:url[?options] If the stream:header URI is specified, the stream header is used to find the stream to write to. This option is available only for stream producers (that is, it cannot appear in from() ). You can append query options to the URI in the following format, ?option=value&option=value&... 331.2. Options The Stream component has no options. The Stream endpoint is configured using URI syntax: with the following path and query parameters: 331.2.1. Path Parameters (1 parameter): Name Description Default Type kind Required Kind of stream to use such as System.in or System.out. String 331.2.2. Query Parameters (22 parameters): Name Description Default Type encoding (common) You can configure the encoding (a charset name) to use for text-based streams (for example, when the message body is a String object). If not provided, Camel uses the JVM default Charset. String fileName (common) When using the stream:file URI format, this option specifies the filename to stream to/from. String url (common) When using the stream:url URI format, this option specifies the URL to stream to/from. The input/output stream will be opened using the JDK URLConnection facility. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages are processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false boolean fileWatcher (consumer) To use the JVM file watcher to listen for file change events to support re-loading files that may be overwritten, somewhat like tail --retry . false boolean groupLines (consumer) To group X number of lines in the consumer. For example, to group 10 lines and therefore only emit one Exchange with 10 lines, instead of 1 Exchange per line. int groupStrategy (consumer) Allows using a custom GroupStrategy to control how to group lines. GroupStrategy initialPromptDelay (consumer) Initial delay in milliseconds before showing the message prompt. This delay occurs only once. Can be used during system startup to avoid message prompts being written while other logging is done to the system out. 2000 long promptDelay (consumer) Optional delay in milliseconds before showing the message prompt. long promptMessage (consumer) Message prompt to use when reading from stream:in; for example, you could set this to Enter a command: String retry (consumer) Will retry opening the stream if it's overwritten, somewhat like tail --retry . If reading from files, you should also enable the fileWatcher option to make it work reliably. 
false boolean scanStream (consumer) To be used for continuously reading a stream, such as the UNIX tail command. false boolean scanStreamDelay (consumer) Delay in milliseconds between read attempts when using scanStream. long exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Note that if the bridgeErrorHandler option is enabled, this option is not in use. By default the consumer will deal with exceptions, which are logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern autoCloseCount (producer) Number of messages to process before closing the stream on the producer side. By default, the stream is never closed (only when the producer is stopped). If more messages are sent, the stream is reopened for another autoCloseCount batch. int closeOnDone (producer) This option is used in combination with the Splitter and streaming to the same file. The idea is to keep the stream open and only close it when the Splitter is done, to improve performance. Note that this requires you to stream only to the same file, not two or more files. false boolean delay (producer) Initial delay in milliseconds before producing the stream. long connectTimeout (advanced) Sets a specified timeout value, in milliseconds, to be used when opening a communications link to the resource referenced by this URLConnection. If the timeout expires before the connection can be established, a java.net.SocketTimeoutException is raised. A timeout of zero is interpreted as an infinite timeout. int httpHeaders (advanced) Optional HTTP headers to use in the request when using an HTTP URL. Map readTimeout (advanced) Sets the read timeout to a specified timeout, in milliseconds. A non-zero value specifies the timeout when reading from the input stream after a connection is established to a resource. If the timeout expires before there is data available for read, a java.net.SocketTimeoutException is raised. A timeout of zero is interpreted as an infinite timeout. int synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 331.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.stream.enabled Enable stream component true Boolean camel.component.stream.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 331.4. Message content The stream: component supports either String or byte[] for writing to streams. Just add either String or byte[] content to the message.in.body . Messages sent to the stream: producer in binary mode are not followed by the newline character (as opposed to String messages). A message with a null body will not be appended to the output stream. The special stream:header URI is used for custom output streams. Just add a java.io.OutputStream object to message.in.header in the key header . See samples for an example. 331.5. Samples In the following sample we route messages from the direct:in endpoint to the System.out stream: // Route messages to the standard output. from("direct:in").to("stream:out"); // Send String payload to the standard output. // Message will be followed by the newline. template.sendBody("direct:in", "Hello Text World"); // Send byte[] payload to the standard output. 
// No newline will be added after the message. template.sendBody("direct:in", "Hello Bytes World".getBytes()); The following sample demonstrates how the header type can be used to determine which stream to use. In the sample we use our own output stream, MyOutputStream . The following sample demonstrates how to continuously read a file stream (analogous to the UNIX tail command): from("stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000") .to("bean:logService?method=parseLogLine"); One gotcha with scanStream (pre Camel 2.7) or scanStream + retry is that the file is re-opened and scanned with each iteration of scanStreamDelay. Until NIO2 is available, we cannot reliably detect when a file is deleted or recreated. If you want to reload the file when it is rolled over or rewritten, you should also turn on the fileWatcher and retry options. from("stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000&retry=true&fileWatcher=true") .to("bean:logService?method=parseLogLine");
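Building on the preceding samples, the following sketch combines two of the documented consumer options, promptMessage and groupLines, on the stream:in endpoint; the bean name and method are illustrative placeholders. The consumer prints the prompt, reads lines typed on System.in, and groups every 10 lines into a single Exchange instead of emitting one Exchange per line:

from("stream:in?promptMessage=Command:&groupLines=10")
    .to("bean:commandService?method=processBatch");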
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-stream</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "stream:in[?options] stream:out[?options] stream:err[?options] stream:header[?options]", "stream:file?fileName=/foo/bar.txt stream:url[?options]", "stream:kind", "// Route messages to the standard output. from(\"direct:in\").to(\"stream:out\"); // Send String payload to the standard output. // Message will be followed by the newline. template.sendBody(\"direct:in\", \"Hello Text World\"); // Send byte[] payload to the standard output. // No newline will be added after the message. template.sendBody(\"direct:in\", \"Hello Bytes World\".getBytes());", "from(\"stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000\") .to(\"bean:logService?method=parseLogLine\");", "from(\"stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000&retry=true&fileWatcher=true\") .to(\"bean:logService?method=parseLogLine\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/stream-component
Chapter 6. Updating the OpenShift Data Foundation external secret
Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to the Storage β†’ Data Foundation β†’ Storage Systems tab and then click on the storage system name. On the Overview β†’ Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the ceph-external-cluster-details-exporter.py Python script that matches your OpenShift Data Foundation version. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. The updated permissions for the user are set as: Run the previously downloaded Python script and save the JSON output that is generated from the external Red Hat Ceph Storage cluster: Note Make sure to use all the flags that you used in the original deployment, including any optional arguments. Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port , are the same as those you used during the original deployment of OpenShift Data Foundation in external mode. --rbd-data-pool-name Is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint Is optional. Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . --monitoring-endpoint Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. Additional flags: rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name. 
output (Optional) The file where the output is to be stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. dry-run (Optional) This parameter prints the commands that would be executed, without running them. Save the JSON output generated after running the script in the previous step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads β†’ Secrets . Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) β†’ Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to the Storage β†’ Data Foundation β†’ Storage Systems tab and then click on the storage system name. On the Overview β†’ Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. If verification steps fail, contact Red Hat Support .
[ "oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}'| base64 --decode > ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade", "client.csi-cephfs-node key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ== caps: [mds] allow rw caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs = client.csi-cephfs-provisioner key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ== caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs metadata=* client.csi-rbd-node key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA== caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd client.csi-rbd-provisioner key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ== caps: [mgr] allow rw caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/updating_openshift_data_foundation/updating-the-openshift-data-foundation-external-secret_rhodf
Chapter 3. Configuring an RHDH instance with a TLS connection in Kubernetes
Chapter 3. Configuring an RHDH instance with a TLS connection in Kubernetes You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own cluster with proper configuration. However, you must use a public Certificate Authority (CA)-signed certificate to configure your Kubernetes cluster. Prerequisites You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation. You have created a namespace and set up a service account with proper read permissions on resources. Example: Kubernetes manifest for role-based access control apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - '*' resources: - pods - configmaps - services - deployments - replicasets - horizontalpodautoscalers - ingresses - statefulsets - limitranges - resourcequotas - daemonsets verbs: - get - list - watch #... You have obtained the secret and the service CA certificate associated with your service account. You have created some resources and added annotations to them so they can be discovered by the Kubernetes plugin. You can apply these Kubernetes annotations: backstage.io/kubernetes-id to label components backstage.io/kubernetes-namespace to label namespaces Procedure Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file: kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic disabled: false 1 - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes disabled: false 2 # ... 1 Set the value to false to enable the backstage-plugin-kubernetes-backend-dynamic plugin. 2 Set the value to false to enable the backstage-plugin-kubernetes plugin. Note The backstage-plugin-kubernetes plugin is currently in Technology Preview . As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA). Set the Kubernetes cluster details and configure the catalog sync options in the app-config-rhdh.yaml file: kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: "app-config-rhdh.yaml": | # ... catalog: rules: - allow: [Component, System, API, Resource, Location] providers: kubernetes: openshift: cluster: openshift processor: namespaceOverride: default defaultOwner: guests schedule: frequency: seconds: 30 timeout: seconds: 5 kubernetes: serviceLocatorMethod: type: 'multiTenant' clusterLocatorMethods: - type: 'config' clusters: - url: <target-cluster-api-server-url> 1 name: openshift authProvider: 'serviceAccount' skipTLSVerify: false 2 skipMetricsLookup: true dashboardUrl: <target-cluster-console-url> 3 dashboardApp: openshift serviceAccountToken: USD{K8S_SERVICE_ACCOUNT_TOKEN} 4 caData: USD{K8S_CONFIG_CA_DATA} 5 # ... 1 The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL. 2 Set the value of this parameter to false to enable the verification of the TLS certificate. 3 Optional: The link to the Kubernetes dashboard managing the ARO cluster. 4 Optional: Pass the service account token using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you can define in your secrets-rhdh secret. 
5 Pass the CA data using a K8S_CONFIG_CA_DATA environment variable that you can define in your secrets-rhdh secret. Save the configuration changes. Verification Run the RHDH application to import your catalog: kubectl -n rhdh-operator get pods -w Verify that the pod log shows no errors for your configuration. Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources. Note If you encounter connection errors, such as certificate or permission issues, check the message box in the component page or view the logs of the pod.
[ "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - '*' resources: - pods - configmaps - services - deployments - replicasets - horizontalpodautoscalers - ingresses - statefulsets - limitranges - resourcequotas - daemonsets verbs: - get - list - watch #", "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic disabled: false 1 - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes disabled: false 2 #", "kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | # catalog: rules: - allow: [Component, System, API, Resource, Location] providers: kubernetes: openshift: cluster: openshift processor: namespaceOverride: default defaultOwner: guests schedule: frequency: seconds: 30 timeout: seconds: 5 kubernetes: serviceLocatorMethod: type: 'multiTenant' clusterLocatorMethods: - type: 'config' clusters: - url: <target-cluster-api-server-url> 1 name: openshift authProvider: 'serviceAccount' skipTLSVerify: false 2 skipMetricsLookup: true dashboardUrl: <target-cluster-console-url> 3 dashboardApp: openshift serviceAccountToken: USD{K8S_SERVICE_ACCOUNT_TOKEN} 4 caData: USD{K8S_CONFIG_CA_DATA} 5 #", "-n rhdh-operator get pods -w" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/proc-configuring-an-rhdh-instance-with-tls-in-kubernetes_admin-rhdh
20.28. Retrieving Guest Virtual Machine Information
20.28. Retrieving Guest Virtual Machine Information 20.28.1. Getting the Domain ID of a Guest Virtual Machine The virsh domid command returns the guest virtual machine's ID. Note that this changes each time the guest starts or restarts. This command requires either the name of the virtual machine or the virtual machine's UUID. Example 20.67. How to retrieve the domain ID for a guest virtual machine The following example retrieves the domain ID of a guest virtual machine named guest1 : Note that domid returns - for guest virtual machines that are in the shut off state. To confirm that the virtual machine is shut off, you can run the virsh list --all command. 20.28.2. Getting the Domain Name of a Guest Virtual Machine The virsh domname command returns the name of the guest virtual machine given its ID or UUID. Note that the ID changes each time the guest starts. Example 20.68. How to retrieve a virtual machine's name The following example retrieves the name of the guest virtual machine whose ID is 8 : 20.28.3. Getting the UUID of a Guest Virtual Machine The virsh domuuid command returns the UUID, or Universally Unique Identifier, for a given guest virtual machine name or ID. Example 20.69. How to display the UUID for a guest virtual machine The following example retrieves the UUID for the guest virtual machine named guest1 : 20.28.4. Displaying Guest Virtual Machine Information The virsh dominfo command displays information about a guest virtual machine, given its name, ID, or UUID. Note that the ID changes each time the virtual machine starts. Example 20.70. How to display guest virtual machine general details The following example displays the general details about the guest virtual machine named guest1 :
[ "virsh domid guest1 8", "virsh domname 8 guest1", "virsh domuuid guest1 r5b2-mySQL01 4a4c59a7-ee3f-c781-96e4-288f2862f011", "virsh dominfo guest1 Id: 8 Name: guest1 UUID: 90e0d63e-d5c1-4735-91f6-20a32ca22c48 OS Type: hvm State: running CPU(s): 1 CPU time: 32.6s Max memory: 1048576 KiB Used memory: 1048576 KiB Persistent: yes Autostart: disable Managed save: no Security model: selinux Security DOI: 0 Security label: system_u:system_r:svirt_t:s0:c552,c818 (enforcing)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-retrieving_guest_virtual_machine_information
Chapter 1. JBoss EAP XP upgrades
Chapter 1. JBoss EAP XP upgrades 1.1. Upgrades and migrations Use the steps outlined in the JBoss EAP XP upgrade and migration guide to prepare, upgrade, and migrate your JBoss EAP XP 2.0.x product to the JBoss EAP XP 3.0.0 product. JBoss EAP XP 3.0.0 is compatible with only JBoss EAP 7.4. If you operate servers on JBoss EAP 7.3 and you want to apply the JBoss EAP XP 3.0.0 patch on it, you must first upgrade your JBoss EAP 7.3 instance to JBoss EAP 7.4. The guide references tools that you can use for the upgrade and migration process. These tools are as follows: Migration Toolkit for Applications (MTA) JBoss Server Migration Tool After you successfully upgrade and migrate the JBoss EAP XP 2.0.x release to JBoss EAP XP 3.0.0, you can begin to implement any application migrations for your JBoss EAP 7.4 instance. Additional resources For information about archiving applications that you plan to migrate to JBoss EAP XP 3.0.0, see Back Up Important Data and Review Server State in the Migration Guide . 1.2. Preparation for upgrade and migration After you upgrade the JBoss EAP Expansion Pack, you might have to update application code. For JBoss EAP XP 3.0.0, some backward compatibility might exist for JBoss EAP XP 2.0.x applications. However, if your application uses features that were deprecated or functionality that was removed from JBoss EAP XP 2.0.x, you might need to make changes to your application code. Review the following new items before you begin the migration process: JBoss EAP XP features added in the JBoss EAP XP 3.0.0 release. MicroProfile capabilities added in JBoss EAP XP 3.0.0. Enhancements to existing MicroProfile capabilities. Capabilities and features that are deprecated in JBoss EAP XP 3.0.0. Capabilities and features that have been removed from JBoss EAP XP 3.0.0. Tools that you can use to migrate from one EAP XP release to another release. After you have reviewed the listed items, analyze your environment and plan for the upgrade process and migration process. Ensure you back up any applications that you plan to migrate to JBoss EAP XP 3.0.0. You can now upgrade your current JBoss EAP XP 2.0.x release to JBoss EAP XP 3.0.0. You can implement any application migrations after the upgrade process. Additional resources For information about archiving applications that you plan to migrate to JBoss EAP XP 3.0.0, see Back Up Important Data and Review Server State in the Migration Guide . 1.3. New JBoss EAP XP capabilities The JBoss EAP XP 3.0.0 includes new features that enhance the use of the Red Hat implementation of the MicroProfile specification for JBoss EAP applications. Note The MicroProfile Reactive Messaging subsystem supports Red Hat AMQ Streams. This feature implements the MicroProfile Reactive Messaging 1.0 API and Red Hat provides the feature as a Technology Preview for JBoss EAP XP 3.0.0. Red Hat tested Red Hat AMQ Streams 2021.Q2 on JBoss EAP. However, check the Red Hat JBoss Enterprise Application Platform supported configurations page for information about the latest Red Hat AMQ Streams version that has been tested on JBoss EAP XP 3.0.0. JBoss EAP XP 3.0.0 includes the following new features in its release: Run CLI scripts after you have started your application. Use the --cli-script=<path to CLI script> argument to update the server configuration of a bootable JAR file at runtime. Use the MicroProfile Reactive Messaging 1.0 API to send and receive messages between microservices. 
Use the MicroProfile Reactive Messaging 1.0 API to write and configure a user application, so the application can send, receive, and process event streams efficiently and asynchronously. Enable MicroProfile Reactive Messaging functionality in your server configuration, because MicroProfile Reactive Messaging comes pre-installed on your server but is not enabled by default. View the MicroProfile Reactive Messaging with Kafka quickstart to learn how you can complete the following tasks on your server: Enable the MicroProfile Reactive Messaging subsystem. Run and test applications by using MicroProfile Reactive Messaging to send data and receive data from Red Hat AMQ Streams. Additional resources For information about Red Hat AMQ Streams, see Overview of AMQ Streams in the Using AMQ Streams on OpenShift guide. For information about Technology Preview features, see Technology Preview Features Support Scope on the Red Hat Customer Portal. For information on the Red Hat AMQ Streams versions, see Red Hat AMQ on the Product Documentation page. For more information about the MicroProfile Reactive Messaging with Kafka quickstart, see jboss-eap-quickstarts and select the listed MicroProfile Reactive Messaging with Kafka quickstart. 1.4. Enhancements to MicroProfile capabilities The JBoss EAP XP 3.0.0 release includes support for the following MicroProfile 4.0 components: MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile JWT MicroProfile Metrics MicroProfile OpenAPI MicroProfile OpenTracing MicroProfile REST Client Additional resources For more information about MicroProfile 4.0 and its specifications, see MicroProfile 4.0 on GitHub . For more information about MicroProfile 4.0 specification components, see About JBoss EAP XP in the Using JBoss EAP XP 3.0.0 guide. 1.5. Deprecated and unsupported MicroProfile capabilities Before you migrate your application to JBoss EAP XP 3.0.0, be aware that some features that were available in JBoss EAP XP 2.0.x might be deprecated or no longer supported. Red Hat removed support for some technologies due to their high maintenance cost, low community interest, and the availability of much better alternative solutions. Ensure that you review the Red Hat JBoss EAP XP 3.0.0 Release Notes guide and the 7.4.0 Release Notes guide for any unsupported and deprecated features. Additional resources For more information about any unsupported and deprecated features for JBoss EAP XP 3.0.0, see the Using JBoss EAP XP 3.0.0 guide. For more information about any unsupported and deprecated features for JBoss EAP 7.4, see the 7.4.0 Release Notes guide.
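As a brief illustration of the --cli-script argument introduced in section 1.3, the following command starts an application packaged as a bootable JAR and applies a CLI script at launch; the JAR and script file names are illustrative placeholders:

java -jar target/microprofile-app-bootable.jar --cli-script=update-config.cli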
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/jboss_eap_xp_upgrade_and_migration_guide/expansion-pack-migration-guide_default
Chapter 3. Setting up and configuring the registry
Chapter 3. Setting up and configuring the registry 3.1. Configuring the registry for AWS user-provisioned infrastructure 3.1.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For S3 on AWS storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry 3.1.2. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 3.1.3. Image Registry Operator configuration parameters for AWS S3 The following configuration parameters are available for AWS S3 registry storage. ImageRegistryConfigStorageS3 holds the information to configure the registry to use the AWS S3 service for back-end storage. See the S3 storage driver documentation for more information. Parameter Description bucket Bucket is the bucket name in which you want to store the registry's data. It is optional and is generated if not provided. region Region is the AWS region in which your bucket exists. It is optional and is set based on the installed AWS Region. regionEndpoint RegionEndpoint is the endpoint for S3 compatible storage services. It is optional and defaults based on the Region that is provided. virtualHostedStyle VirtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint. It is optional and defaults to false. Set this parameter to deploy OpenShift Container Platform to hidden regions. encrypt Encrypt specifies whether or not the registry stores the image in encrypted format. It is optional and defaults to false. keyID KeyID is the KMS key ID to use for encryption. It is optional. Encrypt must be true, or this parameter is ignored. ImageRegistryConfigStorageS3CloudFront CloudFront configures Amazon Cloudfront as the storage middleware in a registry. 
It is optional. Note When the value of the regionEndpoint parameter is configured to a URL of a Rados Gateway, an explicit port must not be specified. For example: regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local 3.2. Configuring the registry for GCP user-provisioned infrastructure 3.2.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For GCS on GCP storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by GCP: REGISTRY_STORAGE_GCS_KEYFILE Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry 3.2.2. Registry storage for GCP with user-provisioned infrastructure You must set up the storage medium manually and configure the settings in the registry custom resource (CR). Prerequisites A cluster on GCP with user-provisioned infrastructure. To configure registry storage for GCP, you need to provide Registry Operator cloud credentials. For GCS on GCP storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by GCP: REGISTRY_STORAGE_GCS_KEYFILE 3.2.3. Image Registry Operator configuration parameters for GCP GCS The following configuration parameters are available for GCP GCS registry storage. Parameter Description bucket Bucket is the bucket name in which you want to store the registry's data. It is optional and is generated if not provided. region Region is the GCS location in which your bucket exists. It is optional and is set based on the installed GCS Region. projectID ProjectID is the Project ID of the GCP project that this bucket should be associated with. It is optional. keyID KeyID is the KMS key ID to use for encryption. It is optional because buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. 3.3. Configuring the registry for Azure user-provisioned infrastructure 3.3.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For Azure registry storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by Azure: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an OpenShift Container Platform secret that contains the required key. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry 3.3.2. 
Configuring registry storage for Azure During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage. Prerequisites A cluster on Azure with user-provisioned infrastructure. To configure registry storage for Azure, provide Registry Operator cloud credentials. For Azure storage the secret is expected to contain one key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an Azure storage container . Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: azure: accountName: <storage-account-name> container: <container-name> 3.3.3. Configuring registry storage for Azure Government During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage. Prerequisites A cluster on Azure with user-provisioned infrastructure in a government region. To configure registry storage for Azure, provide Registry Operator cloud credentials. For Azure storage, the secret is expected to contain one key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an Azure storage container . Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1 1 cloudName is the name of the Azure cloud environment, which can be used to configure the Azure SDK with the appropriate Azure API endpoints. Defaults to AzurePublicCloud . You can also set cloudName to AzureUSGovernmentCloud , AzureChinaCloud , or AzureGermanCloud with sufficient credentials. 3.4. Configuring the registry for bare metal 3.4.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 3.4.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change managementState Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.4.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.4.4. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run the oc edit configs.imageregistry/cluster command, and then change the managementState line from Removed to Managed . 3.4.5. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.4.6. Configuring block registry storage To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Edit the registry configuration so that it references the correct PVC. 3.4.7. Additional resources For more details about configuring registry storage for bare metal, see Recommended configurable storage technology . 3.5. Configuring the registry for vSphere 3.5.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 3.5.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change managementState Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.5.2.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.5.3. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.5.4. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.5.5. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. 
Create the PersistentVolumeClaim object from the file: $ oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: $ oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.5.6. Additional resources For more details about configuring registry storage for vSphere, see Recommended configurable storage technology .
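If you prefer a non-interactive variant of the steps above, the following sketch sets the claim field directly and waits for the Operator to settle. This is an illustrative alternative to the documented interactive edit, assuming the image-registry-storage PVC defined in pvc.yaml:
# Confirm the PVC is bound before pointing the registry at it
$ oc get pvc image-registry-storage -n openshift-image-registry
# Set the claim reference without opening an editor
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":"image-registry-storage"}}}}'
# Wait for the image registry cluster Operator to report Available
$ oc wait clusteroperator/image-registry --for=condition=Available --timeout=10m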
[ "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local", "oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/registry/setting-up-and-configuring-the-registry
Chapter 5. Fixed issues
Chapter 5. Fixed issues Cryostat releases can include fixes for issues that were identified in earlier versions. Review the following notes for details on each issue and its resolution. Issues fixed in Cryostat 3.0.1 The following issues have been resolved in the Cryostat 3.0.1 release: Failure to create or update a Cryostat custom resource due to webhook TLS errors Before Cryostat 3.0.1, if you installed other Operators that were using webhooks in the same namespace as the Cryostat Operator, attempts to create or update Cryostat custom resources could fail. This behavior resulted in the following type of error message: This error occurred because the label selectors on the Cryostat Operator's webhook service were not specific enough and could match other operators. Cryostat 3.0.1 resolves this issue by making the label selector for the Cryostat Operator's webhook service more specific to match the Cryostat Operator pods only. Inability to upload JMC ByteCode Agent instrumentation templates Before Cryostat 3.0.1, Cryostat could not accept bytecode probe definition templates for the JDK Mission Control (JMC) ByteCode Agent integration feature. In this situation, the server received the uploaded template XML file, but the server failed to locate the validation schema and refused the request. Cryostat 3.0.1 resolves this issue by correctly relocating the template schema file, which enables the server to validate and accept uploaded XML files. Parsing failures when uploading archived recordings with labels file When uploading or re-uploading a JDK Flight Recorder (JFR) file from your workstation to the Cryostat storage, you can also select a JSON file containing metadata and label information associated with the uploaded JFR recording. Before Cryostat 3.0.1, the selected JSON file was incorrectly parsed and the label information was not applied to the JFR recording. Cryostat 3.0.1 resolves this issue by correcting the parsing procedure of the supplied metadata file, which ensures that the labels are correctly uploaded to the server and associated with the uploaded JFR recording. Topology view fails to filter target JVMs by label or annotation The Topology view of the Cryostat web console displays a graph or list view of the discovered target JVM applications for the user. The Topology view includes a drop-down menu that you can use to filter these targets based on various properties, including any OpenShift labels or annotations that might be present. Before Cryostat 3.0.1, the Cryostat server processed these labels and annotations incorrectly. In this situation, the server substituted the labels and annotations with [object Object] text, which prevented any filtering based on these attributes. From Cryostat 3.0.1 onward, the Topology view correctly displays any labels and annotations as key-value pairs that you can use to filter the list of target JVMs. Match expressions cannot use the target.agent property API endpoints such as /api/v3/discovery or /api/v3/targets list target JVM objects with various properties, including an agent property. The agent property reflects whether the target JVM uses a JMX connection or a Cryostat agent HTTP connection. 
Before Cryostat 3.0.1, you could not reference or select the agent property for filtering purposes in the following situations: When creating match expressions for stored credentials or automated rules When graphically filtering target JVMs in the Topology view of the Cryostat web console From Cryostat 3.0.1 onward, you can reference the agent property in match expressions. You can also select the agent property as a filter in the Topology view to show only target JVMs that use the Cryostat agent.
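For example, a match expression for an automated rule or stored credential might combine the agent property with other target attributes. The following sketch uses a hypothetical alias; the exact set of selectable properties can vary between Cryostat releases, so consult the match expression reference for your version:
target.agent == true && target.alias == 'inventory-service'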
[ "Error \"failed calling webhook \"vcryostat.kb.io\": failed to call webhook: Post \"https://cryostat-operator-controller-manager-service.openshift-operators.svc:443/validate-operator-cryostat-io-v1beta2-cryostat?timeout=10s\": tls: failed to verify certificate: x509: certificate is valid for infinispan-operator-controller-manager-service.openshift-operators, infinispan-operator-controller-manager-service.openshift-operators.svc, not cryostat-operator-controller-manager-service.openshift-operators.svc\" for field \"undefined\"." ]
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/cryostat-3-0-fixed-issues_cryostat
Chapter 18. Using the KDC Proxy in IdM
Chapter 18. Using the KDC Proxy in IdM Some administrators might choose to make the default Kerberos ports inaccessible in their deployment. To allow users, hosts, and services to obtain Kerberos credentials, you can use the HTTPS service as a proxy that communicates with Kerberos via the HTTPS port 443. In Identity Management (IdM), the Kerberos Key Distribution Center Proxy (KKDCP) provides this functionality. On an IdM server, KKDCP is enabled by default and available at https:// server.idm.example.com /KdcProxy . On an IdM client, you must change its Kerberos configuration to access the KKDCP. 18.1. Configuring an IdM client to use KKDCP As an Identity Management (IdM) system administrator, you can configure an IdM client to use the Kerberos Key Distribution Center Proxy (KKDCP) on an IdM server. This is useful if the default Kerberos ports are not accessible on the IdM server and the HTTPS port 443 is the only way of accessing the Kerberos service. Prerequisites You have root access to the IdM client. Procedure Open the /etc/krb5.conf file for editing. In the [realms] section, enter the URL of the KKDCP for the kdc , admin_server , and kpasswd_server options: For redundancy, you can add the parameters kdc , admin_server , and kpasswd_server multiple times to indicate different KKDCP servers. Restart the sssd service to make the changes take effect: 18.2. Verifying that KKDCP is enabled on an IdM server On an Identity Management (IdM) server, the Kerberos Key Distribution Center Proxy (KKDCP) is automatically enabled each time the Apache web server starts if the attribute and value pair ipaConfigString=kdcProxyEnabled exists in the directory. In this situation, the symbolic link /etc/httpd/conf.d/ipa-kdc-proxy.conf is created. You can verify if the KKDCP is enabled on the IdM server, even as an unprivileged user. Procedure Check that the symbolic link exists: The output confirms that KKDCP is enabled. 18.3. Disabling KKDCP on an IdM server As an Identity Management (IdM) system administrator, you can disable the Kerberos Key Distribution Center Proxy (KKDCP) on an IdM server. Prerequisites You have root access to the IdM server. Procedure Remove the ipaConfigString=kdcProxyEnabled attribute and value pair from the directory: Restart the httpd service: KKDCP is now disabled on the current IdM server. Verification Verify that the symbolic link does not exist: 18.4. Re-enabling KKDCP on an IdM server On an IdM server, the Kerberos Key Distribution Center Proxy (KKDCP) is enabled by default and available at https:// server.idm.example.com /KdcProxy . If KKDCP has been disabled on a server, you can re-enable it. Prerequisites You have root access to the IdM server. Procedure Add the ipaConfigString=kdcProxyEnabled attribute and value pair to the directory: Restart the httpd service: KKDCP is now enabled on the current IdM server. Verification Verify that the symbolic link exists: 18.5. Configuring the KKDCP server I With the following configuration, you can enable TCP to be used as the transport protocol between the IdM KKDCP and the Active Directory (AD) realm, where multiple Kerberos servers are used. Prerequisites You have root access. Procedure Set the use_dns parameter in the [global] section of the /etc/ipa/kdcproxy/kdcproxy.conf file to false . Put the proxied realm information into the /etc/ipa/kdcproxy/kdcproxy.conf file. For example, for the [AD. 
EXAMPLE.COM ] realm with proxy, list the realm configuration parameters as follows: Important The realm configuration parameters must list multiple servers separated by a space, as opposed to /etc/krb5.conf and kdc.conf , in which certain options may be specified multiple times. Restart Identity Management (IdM) services: Additional resources Configure IPA server as a KDC Proxy for AD Kerberos communication (Red Hat Knowledgebase) 18.6. Configuring the KKDCP server II The following server configuration relies on the DNS service records to find Active Directory (AD) servers to communicate with. Prerequisites You have root access. Procedure In the [global] section of the /etc/ipa/kdcproxy/kdcproxy.conf file, set the use_dns parameter to true . The configs parameter allows you to load other configuration modules. In this case, the configuration is read from the MIT libkrb5 library. Optional: If you do not want to use DNS service records, add explicit AD servers to the [realms] section of the /etc/krb5.conf file. If the realm with proxy is, for example, AD. EXAMPLE.COM , you add: Restart Identity Management (IdM) services: Additional resources Configure IPA server as a KDC Proxy for AD Kerberos communication (Red Hat Knowledgebase)
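To confirm on an IdM client that ticket requests really travel through the proxy, you can trace the Kerberos library while requesting a ticket. This is a quick diagnostic sketch, not part of the documented procedure; the user name is hypothetical:
$ KRB5_TRACE=/dev/stdout kinit admin
# With kdc = https://... entries in /etc/krb5.conf, the trace shows HTTPS
# requests to the /KdcProxy endpoint instead of traffic to port 88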
[ "[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }", "~]# systemctl restart sssd", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf", "ipa-ldap-updater /usr/share/ipa/kdcproxy-disable.uldif Update complete The ipa-ldap-updater command was successful", "systemctl restart httpd.service", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf ls: cannot access '/etc/httpd/conf.d/ipa-kdc-proxy.conf': No such file or directory", "ipa-ldap-updater /usr/share/ipa/kdcproxy-enable.uldif Update complete The ipa-ldap-updater command was successful", "systemctl restart httpd.service", "ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf", "[global] use_dns = false", "[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464", "ipactl restart", "[global] configs = mit use_dns = true", "[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }", "ipactl restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/using-the-kdc-proxy-in-idm_managing-users-groups-hosts
Chapter 1. Preparing to install on GCP
Chapter 1. Preparing to install on GCP 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on GCP Before installing OpenShift Container Platform on Google Cloud Platform (GCP), you must create a service account and configure a GCP project. See Configuring a GCP project for details about creating a project, enabling API services, configuring DNS, GCP account limits, and supported GCP regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating long-term credentials for GCP for other options. 1.3. Choosing a method to install OpenShift Container Platform on GCP You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on GCP : You can install OpenShift Container Platform on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on GCP : You can install a customized cluster on GCP infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on GCP with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on GCP in a restricted network : You can install OpenShift Container Platform on GCP on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the GCP APIs. Installing a cluster into an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing GCP Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits on creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing GCP VPC. 
You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on GCP infrastructure that you provision, by using one of the following methods: Installing a cluster on GCP with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP infrastructure that you provide. You can use the provided Deployment Manager templates to assist with the installation. Installing a cluster with shared VPC on user-provisioned infrastructure in GCP : You can use the provided Deployment Manager templates to create GCP resources in a shared VPC infrastructure. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP in a restricted network with user-provisioned infrastructure. By creating an internal mirror of the installation release content, you can install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.4. Next steps Configuring a GCP project
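Before choosing an installation method, the project-level prerequisites can be staged from the gcloud CLI. The following is a minimal sketch with a hypothetical project ID and service account name; the authoritative steps and full API list are in Configuring a GCP project:
# Select the project that will host the cluster
$ gcloud config set project my-ocp-project
# Enable some of the required API services
$ gcloud services enable compute.googleapis.com iam.googleapis.com dns.googleapis.com
# Create the service account the installation program authenticates with
$ gcloud iam service-accounts create ocp-installer --display-name="OpenShift installer"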
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/preparing-to-install-on-gcp
8.156. openldap
8.156. openldap 8.156.1. RHBA-2014:1426 - openldap bug fix and enhancement update Updated openldap packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. OpenLDAP is an open source suite of LDAP (Lightweight Directory Access Protocol) applications and development tools. LDAP is a set of protocols for accessing directory services (usually phone book style information, but other information is possible) over the Internet, similar to the way DNS (Domain Name System) information is propagated over the Internet. The openldap package contains configuration files, libraries, and documentation for OpenLDAP. Note The openldap packages have been upgraded to upstream version 2.4.39, which provides a number of bug fixes and enhancements over the previous version. Specifically, Memory-mapped database library (LMDB) support in OpenLDAP has been enabled. (BZ# 923680 ) Users of openldap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/openldap
Chapter 2. Configuring the Cluster Samples Operator
Chapter 2. Configuring the Cluster Samples Operator The Cluster Samples Operator, which operates in the openshift namespace, installs and updates the Red Hat Enterprise Linux (RHEL)-based OpenShift Container Platform image streams and OpenShift Container Platform templates. 2.1. Understanding the Cluster Samples Operator During installation, the Operator creates the default configuration object for itself and then creates the sample image streams and templates, including quick start templates. Note To facilitate image stream imports from other registries that require credentials, a cluster administrator can create any additional secrets that contain the content of a Docker config.json file in the openshift namespace needed for image import. The Cluster Samples Operator configuration is a cluster-wide resource, and the deployment is contained within the openshift-cluster-samples-operator namespace. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. When each sample is created or updated, the Cluster Samples Operator includes an annotation that denotes the version of OpenShift Container Platform. The Operator uses this annotation to ensure that each sample matches the release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator, where that version annotation is modified or deleted, are reverted automatically. Note The Jenkins images are part of the image payload from installation and are tagged into the image streams directly. The Cluster Samples Operator configuration resource includes a finalizer which cleans up the following upon deletion: Operator managed image streams. Operator managed templates. Operator generated configuration resources. Cluster status resources. Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. 2.1.1. Cluster Samples Operator's use of management state The Cluster Samples Operator is bootstrapped as Managed by default or if global proxy is configured. In the Managed state, the Cluster Samples Operator is actively managing its resources and keeping the component active in order to pull sample image streams and images from the registry and ensure that the requisite sample templates are installed. Certain circumstances result in the Cluster Samples Operator bootstrapping itself as Removed including: If the Cluster Samples Operator cannot reach registry.redhat.io after three minutes on initial startup after a clean installation. If the Cluster Samples Operator detects it is on an IPv6 network. If the image controller configuration parameters prevent the creation of image streams by using the default image registry, or by using the image registry specified by the samplesRegistry setting . Note For OpenShift Container Platform, the default image registry is registry.redhat.io . However, if the Cluster Samples Operator detects that it is on an IPv6 network and an OpenShift Container Platform global proxy is configured, then IPv6 check supersedes all the checks. As a result, the Cluster Samples Operator bootstraps itself as Removed . Important IPv6 installations are not currently supported by registry.redhat.io . The Cluster Samples Operator pulls most of the sample image streams and images from registry.redhat.io . 2.1.1.1. 
Restricted network installation Bootstrapping as Removed when unable to access registry.redhat.io facilitates restricted network installations when the network restriction is already in place. Bootstrapping as Removed when network access is restricted allows the cluster administrator more time to decide if samples are desired, because the Cluster Samples Operator does not submit alerts that sample image stream imports are failing when the management state is set to Removed . When the Cluster Samples Operator comes up as Managed and attempts to install sample image streams, it starts alerting two hours after initial installation if there are failing imports. 2.1.1.2. Restricted network installation with initial network access Conversely, if a cluster that is intended to be a restricted network or disconnected cluster is first installed while network access exists, the Cluster Samples Operator installs the content from registry.redhat.io since it can access it. If you want the Cluster Samples Operator to still bootstrap as Removed in order to defer samples installation until you have decided which samples are desired, set up image mirrors, and so on, then follow the instructions for using the Samples Operator with an alternate registry and customizing nodes, both linked in the additional resources section, to override the Cluster Samples Operator default configuration and initially come up as Removed . You must put the following additional YAML file in the openshift directory created by openshift-install create manifests : Example Cluster Samples Operator YAML file with managementState: Removed apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed 2.1.2. Cluster Samples Operator's tracking and error recovery of image stream imports After creation or update of a samples image stream, the Cluster Samples Operator monitors the progress of each image stream tag's image import. If an import fails, the Cluster Samples Operator retries the import through the image stream image import API, which is the same API used by the oc import-image command, approximately every 15 minutes until it sees the import succeed, or if the Cluster Samples Operator's configuration is changed such that either the image stream is added to the skippedImagestreams list, or the management state is changed to Removed . Additional resources If the Cluster Samples Operator is removed during installation, you can use the Cluster Samples Operator with an alternate registry so content can be imported, and then set the Cluster Samples Operator to Managed to get the samples. To ensure the Cluster Samples Operator bootstraps as Removed in a restricted network installation with initial network access to defer samples installation until you have decided which samples are desired, follow the instructions for customizing nodes to override the Cluster Samples Operator default configuration and initially come up as Removed . To host samples in your disconnected environment, follow the instructions for using the Cluster Samples Operator with an alternate registry . 2.1.3. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure. 2.2. Cluster Samples Operator configuration parameters The samples resource offers the following configuration fields: Parameter Description managementState Managed : The Cluster Samples Operator updates the samples as the configuration dictates. Unmanaged : The Cluster Samples Operator ignores updates to its configuration resource object and any image streams or templates in the openshift namespace. Removed : The Cluster Samples Operator removes the set of Managed image streams and templates in the openshift namespace. It ignores new samples created by the cluster administrator or any samples in the skipped lists. After the removals are complete, the Cluster Samples Operator works like it is in the Unmanaged state and ignores any watch events on the sample resources, image streams, or templates. samplesRegistry Allows you to specify which registry is accessed by image streams for their image content. samplesRegistry defaults to registry.redhat.io for OpenShift Container Platform. Note Creation or update of RHEL content does not commence if the secret for pull access is not in place when either Samples Registry is not explicitly set, leaving an empty string, or when it is set to registry.redhat.io. In both cases, image imports work off of registry.redhat.io, which requires credentials. Creation or update of RHEL content is not gated by the existence of the pull secret if the Samples Registry is overridden to a value other than the empty string or registry.redhat.io. architectures Placeholder to choose an architecture type. skippedImagestreams Image streams that are in the Cluster Samples Operator's inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. For example, ["httpd","perl"] . skippedTemplates Templates that are in the Cluster Samples Operator's inventory, but that the cluster administrator wants the Operator to ignore or not manage.
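As an illustrative sketch of how these fields combine in a single Config resource, the following example points samples at a mirror and skips two image streams. The mirror registry host name is hypothetical:
apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  samplesRegistry: mirror.registry.example.com:5000
  architectures:
  - x86_64
  skippedImagestreams:
  - httpd
  - perl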
Secret, image stream, and template watch events can come in before the initial samples resource object is created. When this happens, the Cluster Samples Operator detects and re-queues the event. 2.2.1. Configuration restrictions When the Cluster Samples Operator starts supporting multiple architectures, the architecture list is not allowed to be changed while in the Managed state. To change the architectures values, a cluster administrator must: Mark the Management State as Removed , saving the change. In a subsequent change, edit the architecture and change the Management State back to Managed . The Cluster Samples Operator still processes secrets while in Removed state. You can create the secret before switching to Removed , while in Removed before switching to Managed , or after switching to Managed state. There are delays in creating the samples until the secret event is processed if you create the secret after switching to Managed . This helps facilitate the changing of the registry, where you choose to remove all the samples before switching to ensure a clean slate. Removing all samples before switching is not required. 2.2.2. Conditions The samples resource maintains the following conditions in its status: Condition Description SamplesExists Indicates the samples are created in the openshift namespace. ImageChangesInProgress True when image streams are created or updated, but not all of the tag spec generations and tag status generations match. False when all of the generations match, or unrecoverable errors occurred during import; the last seen error is in the message field. The list of pending image streams is in the reason field. This condition is deprecated in OpenShift Container Platform. ConfigurationValid True or False based on whether any of the restricted changes noted previously are submitted. RemovePending Indicator that there is a Management State: Removed setting pending, but the Cluster Samples Operator is waiting for the deletions to complete. ImportImageErrorsExist Indicator of which image streams had errors during the image import phase for one of their tags. True when an error has occurred. The list of image streams with an error is in the reason field. The details of each error reported are in the message field. MigrationInProgress True when the Cluster Samples Operator detects that the version is different than the Cluster Samples Operator version with which the current samples set are installed. This condition is deprecated in OpenShift Container Platform. 2.3. Accessing the Cluster Samples Operator configuration You can configure the Cluster Samples Operator by editing the file with the provided parameters. Prerequisites Install the OpenShift CLI ( oc ). Procedure Access the Cluster Samples Operator configuration: $ oc edit configs.samples.operator.openshift.io/cluster -o yaml The Cluster Samples Operator configuration resembles the following example: apiVersion: samples.operator.openshift.io/v1 kind: Config # ... 2.4. Removing deprecated image stream tags from the Cluster Samples Operator The Cluster Samples Operator leaves deprecated image stream tags in an image stream because users can have deployments that use the deprecated image stream tags. You can remove deprecated image stream tags by editing the image stream with the oc tag command. Note Deprecated image stream tags that the samples providers have removed from their image streams are not included on initial installations. Prerequisites You installed the oc CLI.
Procedure Remove deprecated image stream tags by editing the image stream with the oc tag command. $ oc tag -d <image_stream_name:tag> Example output Deleted tag default/<image_stream_name:tag>. Additional resources For more information about configuring credentials, see Using image pull secrets .
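When planning a mirror as described in the mirroring section earlier in this chapter, you can dump the imagestreamtag-to-image config map to see which populating image each image stream tag needs. For example:
$ oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o yaml
# Keys in the data field use the <image_stream_name>_<image_stream_tag_name> format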
[ "apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed", "oc edit configs.samples.operator.openshift.io/cluster -o yaml", "apiVersion: samples.operator.openshift.io/v1 kind: Config", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/images/configuring-samples-operator
23.18. Storage Pools
23.18. Storage Pools Although all storage pool back-ends share the same public APIs and XML format, they have varying levels of capabilities. Some may allow creation of volumes; others may only allow use of pre-existing volumes. Some may have constraints on volume size or placement. The top level element for a storage pool document is <pool> . It has a single attribute type , which can take the following values: dir, fs, netfs, disk, iscsi, logical, scsi, mpath, rbd, sheepdog , or gluster . 23.18.1. Providing Metadata for the Storage Pool The following XML example shows the metadata tags that can be added to a storage pool. In this example, the pool is an iSCSI storage pool. <pool type="iscsi"> <name>virtimages</name> <uuid>3e3fce45-4f53-4fa7-bb32-11f34168b82b</uuid> <allocation>10000000</allocation> <capacity>50000000</capacity> <available>40000000</available> ... </pool> Figure 23.79. General metadata tags The elements that are used in this example are explained in Table 23.27, "Storage pool metadata elements" . Table 23.27. Storage pool metadata elements Element Description <name> Provides a name for the storage pool which must be unique to the host physical machine. This is mandatory when defining a storage pool. <uuid> Provides an identifier for the storage pool which must be globally unique. Although supplying the UUID is optional, if the UUID is not provided at the time the storage pool is created, a UUID will be automatically generated. <allocation> Provides the total storage allocation for the storage pool. This may be larger than the sum of the total allocation across all storage volumes due to the metadata overhead. This value is expressed in bytes. This element is read-only and the value should not be changed. <capacity> Provides the total storage capacity for the pool. Due to underlying device constraints, it may not be possible to use the full capacity for storage volumes. This value is in bytes. This element is read-only and the value should not be changed. <available> Provides the free space available for allocating new storage volumes in the storage pool. Due to underlying device constraints, it may not be possible to allocate all of the free space to a single storage volume. This value is in bytes. This element is read-only and the value should not be changed. 23.18.2. Source Elements Within the <pool> element there can be a single <source> element defined (only one). The child elements of <source> depend on the storage pool type. Some examples of the XML that can be used are as follows: ... <source> <host name="iscsi.example.com"/> <device path="demo-target"/> <auth type='chap' username='myname'> <secret type='iscsi' usage='mycluster_myname'/> </auth> <vendor name="Acme"/> <product name="model"/> </source> ... Figure 23.80. Source element option 1 ... <source> <adapter type='fc_host' parent='scsi_host5' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/> </source> ... Figure 23.81. Source element option 2 The child elements that are accepted by <source> are explained in Table 23.28, "Source child elements" . Table 23.28. Source child elements Element Description <device> Provides the source for storage pools backed by host physical machine devices (based on <pool type=> (as shown in Section 23.18, "Storage Pools" )). May be repeated multiple times depending on back-end driver. Contains a single attribute path which is the fully qualified path to the block device node.
<dir> Provides the source for storage pools backed by directories ( <pool type='dir'> ), or optionally to select a subdirectory within a storage pool that is based on a filesystem ( <pool type='gluster'> ). This element may only occur once per ( <pool> ). This element accepts a single attribute ( <path> ) which is the full path to the backing directory. <adapter> Provides the source for storage pools backed by SCSI adapters ( <pool type='scsi'> ). This element may only occur once per ( <pool> ). Attribute name is the SCSI adapter name (for example, "scsi_host1"). Although "host1" is still supported for backwards compatibility, it is not recommended. Attribute type specifies the adapter type. Valid values are 'fc_host'| 'scsi_host' . If omitted and the name attribute is specified, then it defaults to type='scsi_host' . To keep backwards compatibility, the attribute type is optional for the type='scsi_host' adapter, but mandatory for the type='fc_host' adapter. Attributes wwnn (World Wide Node Name) and wwpn (World Wide Port Name) are used by the type='fc_host' adapter to uniquely identify the device in the Fibre Channel storage fabric (the device can be either an HBA or vHBA). Both wwnn and wwpn should be specified. For instructions on how to get wwnn/wwpn of a (v)HBA, see Section 20.27.11, "Collect Device Configuration Settings" . The optional attribute parent specifies the parent device for the type='fc_host' adapter. <host> Provides the source for storage pools backed by storage from a remote server ( type='netfs'|'iscsi'|'rbd'|'sheepdog'|'gluster' ). This element should be used in combination with a <directory> or <device> element. Contains an attribute name which is the host name or IP address of the server. May optionally contain a port attribute for the protocol specific port number. <auth> If present, the <auth> element provides the authentication credentials needed to access the source by the setting of the type attribute (pool type='iscsi'|'rbd' ). The type must be either type='chap' or type='ceph' . Use "ceph" for Ceph RBD (Rados Block Device) network sources and use "iscsi" for CHAP (Challenge-Handshake Authentication Protocol) iSCSI targets. Additionally a mandatory attribute username identifies the user name to use during authentication as well as a sub-element secret with a mandatory attribute type, to tie back to a libvirt secret object that holds the actual password or other credentials. The domain XML intentionally does not expose the password, only the reference to the object that manages the password. The secret element requires either a uuid attribute with the UUID of the secret object or a usage attribute matching the key that was specified in the secret object. <name> Provides the source for storage pools backed by a storage device from a named element <type> which can take the values: ( type='logical'|'rbd'|'sheepdog','gluster' ). <format> Provides information about the format of the storage pool <type> which can take the following values: type='logical'|'disk'|'fs'|'netfs' ). Note that this value is back-end specific. This is typically used to indicate a filesystem type, or a network filesystem type, or a partition table type, or an LVM metadata type. As all drivers are required to have a default value for this, the element is optional. <vendor> Provides optional information about the vendor of the storage device. This contains a single attribute <name> whose value is back-end specific. <product> Provides optional information about the product name of the storage device.
This contains a single attribute <name> whose value is back-end specific. 23.18.3. Creating Target Elements A single <target> element is contained within the top level <pool> element for the following types: ( type='dir'|'fs'|'netfs'|'logical'|'disk'|'iscsi'|'scsi'|'mpath' ). This tag is used to describe the mapping of the storage pool into the host filesystem. It can contain the following child elements: <pool> ... <target> <path>/dev/disk/by-path</path> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> <timestamps> <atime>1341933637.273190990</atime> <mtime>1341930622.047245868</mtime> <ctime>1341930622.047245868</ctime> </timestamps> <encryption type='...'> ... </encryption> </target> </pool> Figure 23.82. Target elements XML example The table ( Table 23.29, "Target child elements" ) explains the child elements that are valid for the parent <target> element: Table 23.29. Target child elements Element Description <path> Provides the location at which the storage pool will be mapped into the local filesystem namespace. For a filesystem or directory-based storage pool it will be the name of the directory in which storage volumes will be created. For device-based storage pools it will be the name of the directory in which the device's nodes exist. For the latter, /dev/ may seem like the logical choice, however, the device's nodes there are not guaranteed to be stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations. <permissions> This is currently only useful for directory- or filesystem-based storage pools, which are mapped as a directory into the local filesystem namespace. It provides information about the permissions to use for the final directory when the storage pool is built. The <mode> element contains the octal permission set. The <owner> element contains the numeric user ID. The <group> element contains the numeric group ID. The <label> element contains the MAC (for example, SELinux) label string. <timestamps> Provides timing information about the storage volume. Up to four sub-elements are present, where timestamps='atime'|'btime'|'ctime'|'mtime' holds the access, birth, change, and modification time of the storage volume, where known. The time format used is <seconds>.<nanoseconds> since the beginning of the epoch (1 Jan 1970). If nanosecond resolution is 0 or otherwise unsupported by the host operating system or filesystem, then the nanoseconds part is omitted. This is a read-only attribute and is ignored when creating a storage volume. <encryption> If present, specifies how the storage volume is encrypted. For more information, see libvirt upstream pages . 23.18.4. Setting Device Extents If a storage pool exposes information about its underlying placement or allocation scheme, the <device> element within the <source> element may contain information about its available extents. Some storage pools have a constraint that a storage volume must be allocated entirely within a single constraint (such as disk partition pools). Thus, the extent information allows an application to determine the maximum possible size for a new storage volume. For storage pools supporting extent information, within each <device> element there will be zero or more <freeExtent> elements. Each of these elements contains two attributes, <start> and <end> which provide the boundaries of the extent on the device, measured in bytes.
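The section above describes <freeExtent> but gives no example. The following sketch, with hypothetical byte offsets and device path, shows the shape such a <device> element takes when extent information is reported:
<source>
  <device path='/dev/sda'>
    <!-- each extent is bounded by start and end offsets, in bytes -->
    <freeExtent start='0' end='524288000'/>
    <freeExtent start='1073741824' end='2147483648'/>
  </device>
</source>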
[ "<pool type=\"iscsi\"> <name>virtimages</name> <uuid>3e3fce45-4f53-4fa7-bb32-11f34168b82b</uuid> <allocation>10000000</allocation> <capacity>50000000</capacity> <available>40000000</available> </pool>", "<source> <host name=\"iscsi.example.com\"/> <device path=\"demo-target\"/> <auth type='chap' username='myname'> <secret type='iscsi' usage='mycluster_myname'/> </auth> <vendor name=\"Acme\"/> <product name=\"model\"/> </source>", "<source> <adapter type='fc_host' parent='scsi_host5' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/> </source>", "<pool> <target> <path>/dev/disk/by-path</path> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> <timestamps> <atime>1341933637.273190990</atime> <mtime>1341930622.047245868</mtime> <ctime>1341930622.047245868</ctime> </timestamps> <encryption type='...'> </encryption> </target> </pool>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Storage_pools
Template APIs
Template APIs OpenShift Container Platform 4.18 Reference guide for template APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/template_apis/index
13.3.2. Let the Installer Prompt You for a Driver Update
13.3.2. Let the Installer Prompt You for a Driver Update Begin the installation normally for whatever method you have chosen. If the installer cannot load drivers for a piece of hardware that is essential for the installation process (for example, if it cannot detect any network or storage controllers), it prompts you to insert a driver update disk: Figure 13.5. The no driver found dialog Select Use a driver disk and refer to Section 13.4, "Specifying the Location of a Driver Update Image File or a Driver Update Disk" .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-driver_updates-let_the_installer_prompt_you_for_a_driver_update-ppc
Chapter 6. Configuring Capsule Servers with custom SSL certificates for load balancing (with Puppet)
Chapter 6. Configuring Capsule Servers with custom SSL certificates for load balancing (with Puppet) You can configure one or more Capsule Servers that use custom SSL certificates for load balancing. 6.1. Prerequisites Prepare a new Capsule Server to use for load balancing. See Chapter 2, Preparing Capsule Servers for load balancing . Review Section 1.2, "Services and features supported in a load-balanced setup" . 6.2. Creating a custom SSL certificate for Capsule Server On each Capsule Server you want to configure for load balancing, create a configuration file for the Certificate Signing Request and include the load balancer and Capsule Server as Subject Alternative Names (SAN). Procedure To store all the source certificate files, create a directory that is accessible only to the root user: Create a private key with which to sign the certificate signing request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Capsule Server, skip this step. Create the /root/capsule_cert/openssl.cnf configuration file for the CSR and include the following content: 1 The certificate's common name must match the FQDN of Capsule Server. Ensure to change this when running the command on each Capsule Server that you configure for load balancing. You can also set a wildcard value * . If you set a wildcard value, you must add the -t capsule option when you use the katello-certs-check command. 2 Under [alt_names] , include the FQDN of the load balancer as DNS.1 and the FQDN of Capsule Server as DNS.2 . Optional: If you want to add Distinguished Name (DN) details to the CSR, add the following information to the [ req_distinguished_name ] section: 1 Two letter code 2 Full name 3 Full name (example: New York) 4 Division responsible for the certificate (example: IT department) Generate CSR: 1 Path to the private key 2 Path to the configuration file 3 Path to the CSR to generate Send the certificate signing request to the certificate authority (CA). The same CA must sign certificates for Satellite Server and Capsule Server. When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the CA for the preferred method. In response to the request, you can expect to receive a CA bundle and a signed certificate, in separate files. Copy the Certificate Authority bundle and Capsule Server certificate file that you receive from the Certificate Authority, and Capsule Server private key to your Satellite Server. On Satellite Server, validate Capsule Server certificate input files: 1 Capsule Server certificate file, provided by your Certificate Authority 2 Capsule Server's private key that you used to sign the certificate 3 Certificate Authority bundle, provided by your Certificate Authority If you set the commonName= to a wildcard value * , you must add the -t capsule option to the katello-certs-check command. Retain a copy of the example capsule-certs-generate command that is output by the katello-certs-check command for creating the Certificate Archive File for this Capsule Server. 6.3. Configuring Capsule Server with custom SSL certificates to generate and sign Puppet certificates On the Capsule Server that will generate Puppet certificates for all other load-balancing Capsule Servers, configure Puppet certificate generation and signing. 
Procedure Append the following option to the capsule-certs-generate command that you obtain from the output of the katello-certs-check command: On Satellite Server, enter the capsule-certs-generate command to generate Capsule certificates: Retain a copy of the example satellite-installer command from the output for installing Capsule Server certificates. Copy the certificate archive file from Satellite Server to Capsule Server. Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command: On Capsule Server that is the Puppetserver Certificate Authority, stop the Puppet server: Generate Puppet certificates for all other Capsule Servers that you configure for load balancing, except the system where you first configured Puppet certificate signing: This command creates the following files: /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppetserver/ca/signed/ capsule.example.com .pem Start the Puppet server: 6.4. Configuring remaining Capsule Servers with custom SSL certificates for load balancing On each load-balancing Capsule Server, excluding the Capsule Server configured to sign Puppet certificates, configure the system to use Puppet certificates. Procedure Append the following option to the capsule-certs-generate command that you obtain from the output of the katello-certs-check command: On Satellite Server, enter the capsule-certs-generate command to generate Capsule certificates: Retain a copy of the example satellite-installer command from the output for installing Capsule Server certificates. Copy the certificate archive file from Satellite Server to Capsule Server. On Capsule Server, install the puppetserver package: On Capsule Server, create directories for puppet certificates: On Capsule Server, copy the Puppet certificates for this Capsule Server from the system where you configure Capsule Server to sign Puppet certificates: On Capsule Server, change the /etc/puppetlabs/puppet/ssl/ directory ownership to user puppet and group puppet : On Capsule Server, set the SELinux context for the /etc/puppetlabs/puppet/ssl/ directory: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command: 6.5. Managing Puppet limitations with load balancing in Satellite If you use Puppet, Puppet certificate signing is assigned to the first Capsule that you configure. If the first Capsule is down, hosts cannot obtain Puppet content. Puppet Certificate Authority (CA) management does not support certificate signing in a load-balanced setup. Puppet CA stores certificate information, such as the serial number counter and CRL, on the file system. Multiple writer processes that attempt to use the same data can corrupt it. To manage this Puppet limitation, complete the following steps: Configure Puppet certificate signing on one Capsule Server, typically the first system where you configure Capsule Server for load balancing. Configure the clients to send CA requests to port 8141 on a load balancer. Configure a load balancer to redirect CA requests from port 8141 to port 8140 on the system where you configure Capsule Server to sign Puppet certificates. 
To troubleshoot issues, reproduce the issue on each Capsule, bypassing the load balancer. This solution does not use Pacemaker or other similar HA tools to maintain one state across all Capsules.
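One way to implement the port 8141 to 8140 redirection described above is with a TCP frontend on the load balancer. The following HAProxy fragment is a sketch that assumes HAProxy is your load balancer and that capsule-ca.example.com is the Capsule that signs Puppet certificates; any TCP-capable load balancer works equivalently:
# /etc/haproxy/haproxy.cfg (fragment)
frontend puppet_ca
    bind *:8141
    mode tcp
    default_backend puppet_ca_signing

backend puppet_ca_signing
    mode tcp
    # Only the Capsule that signs Puppet certificates, on the regular Puppet port
    server capsule-ca capsule-ca.example.com:8140 check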
[ "mkdir /root/capsule_cert", "openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096", "[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name x509_extensions = usr_cert prompt = no [ req_distinguished_name ] commonName = capsule.example.com 1 [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [alt_names] 2 DNS.1 = loadbalancer.example.com DNS.2 = capsule.example.com", "[req_distinguished_name] CN = capsule.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4", "openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config /root/capsule_cert/openssl.cnf \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3", "katello-certs-check -c /root/capsule_cert/capsule_cert.pem \\ 1 -k /root/capsule_cert/capsule_cert_key.pem \\ 2 -b /root/capsule_cert/ca_cert_bundle.pem 3", "--foreman-proxy-cname loadbalancer.example.com", "capsule-certs-generate --certs-tar /root/capsule_cert/capsule-ca.tar --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule-ca.example.com --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --server-cert /root/capsule_cert/capsule-ca.pem --server-key /root/capsule_cert/capsule-ca.pem", "--enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"true\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"true\"", "satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" certs.tgz \" --enable-foreman-proxy-plugin-remote-execution-script --enable-puppet --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \"oauth key\" --foreman-proxy-oauth-consumer-secret \"oauth secret\" --foreman-proxy-puppetca \"true\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule-ca.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server true --puppet-server-ca \"true\"", "systemctl stop puppetserver", "puppetserver ca generate --ca-client --certname capsule.example.com --subject-alt-names loadbalancer.example.com", "systemctl start puppetserver", "--foreman-proxy-cname loadbalancer.example.com", "capsule-certs-generate --certs-tar /root/capsule_cert/capsule.tar --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule.example.com --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --server-cert /root/capsule_cert/capsule.pem --server-key /root/capsule_cert/capsule.pem", "scp /root/ capsule.example.com -certs.tar root@ capsule.example.com : capsule.example.com -certs.tar", "satellite-maintain packages install puppetserver", "mkdir -p /etc/puppetlabs/puppet/ssl/certs/ /etc/puppetlabs/puppet/ssl/private_keys/ /etc/puppetlabs/puppet/ssl/public_keys/", "scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem scp root@ 
capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem", "chown -R puppet:puppet /etc/puppetlabs/puppet/ssl/", "restorecon -Rv /etc/puppetlabs/puppet/ssl/", "--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"false\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"", "satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-puppetca \"false\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"" ]
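Before distributing the signed certificate, you can confirm that it actually carries both the load balancer and Capsule names. A minimal check, assuming the CA-signed certificate is saved at /root/capsule_cert/capsule_cert.pem as in the katello-certs-check command above:

openssl x509 -in /root/capsule_cert/capsule_cert.pem -noout -text | grep -A1 "Subject Alternative Name"

The output should list both DNS:loadbalancer.example.com and DNS:capsule.example.com, matching the subjectAltName entries in the OpenSSL configuration file.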
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/configuring-capsule-servers-with-custom-ssl-certificates-for-load-balancing-with-puppet_load-balancing
Chapter 16. The Rapid Stock Market Quickstart
Chapter 16. The Rapid Stock Market Quickstart The Rapid Stock Market quickstart demonstrates how JBoss Data Grid's compatibility mode works with a Hot Rod client (to store data) and an HTTP client using REST (to retrieve data). This quickstart is only available in JBoss Data Grid's Remote Client-Server mode and does not use any containers. The Rapid Stock Market quickstart includes a server-side and a client-side application. 16.1. Build and Run the Rapid Stock Market Quickstart The Rapid Stock Market quickstart requires the following configuration for the server and client sides of the application. Procedure 16.1. Rapid Stock Market Quickstart Server-side Configuration Navigate to the Root Directory Open a command line and navigate to the root directory of this quickstart. Build a server module for the JBoss Data Grid Server by packaging a class that is common to the client and server in a JAR file: Place the new JAR file in a directory structure that is similar to the server module. Install the server module into the server. Copy the prepared module to the server: Add the new module as a dependency of the org.infinispan.commons module by adding the following into the modules/system/layers/base/org/infinispan/commons/main/module.xml file: Build the application: Configure the JBoss Data Grid Server to use the appropriate configuration file. Copy the example configuration file for compatibility mode to a location where the JBoss Data Grid Server can locate and use it: Remove the security-domain and auth-method attributes from the rest-connector element to disable REST security. Start the JBoss Data Grid Server in compatibility mode: Procedure 16.2. Rapid Stock Market Quickstart Client-side Configuration In a new command line terminal window, start the client-side application: Use the instructions in the help menu for the client application.
[ "mvn clean package -Pprepare-server-module", "cp -r target/modules USD{JDG_SERVER_HOME}/", "<module name=\"org.infinispan.quickstart.compatibility.common\"/>", "mvn clean package", "cp USD{JDG_SERVER_HOME}/docs/examples/configs/standalone-compatibility-mode.xml USD{JDG_SERVER_HOME}/standalone/configuration", "USD{JDG_SERVER_HOME}/bin/standalone.sh -c standalone-compatibility-mode.xml", "mvn exec:java -Pclient" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-The_Rapid_Stock_Market_Quickstart
12.2. Container Orchestration
12.2. Container Orchestration Red Hat Enterprise Linux Atomic Host 7.1.5 and Red Hat Enterprise Linux 7.1 include the following updates: kubernetes-1.0.3-0.1.gitb9a88a7.el7 The new kubernetes-client subpackage, which provides the kubectl command, has been added to the kubernetes component. etcd-2.1.1-2.el7 etcd now provides improved performance when using the peer TLS protocol. Red Hat Enterprise Linux Atomic Host 7.1.4 and Red Hat Enterprise Linux 7.1 include the following updates: kubernetes-1.0.0-0.8.gitb2dafda.el7 You can now set up a Kubernetes cluster using the Ansible automation platform. Red Hat Enterprise Linux Atomic Host 7.1.3 and Red Hat Enterprise Linux 7.1 include the following updates: kubernetes-0.17.1-4.el7 Kubernetes nodes no longer need to be explicitly created in the API server; they automatically join and register themselves. NFS, GlusterFS, and Ceph block plug-ins have been added to Red Hat Enterprise Linux, and NFS support has been added to Red Hat Enterprise Linux Atomic Host. etcd-2.0.11-2.el7 Fixed bugs with adding or removing cluster members, and improved performance and resource usage. The GOMAXPROCS environment variable is now set to the maximum number of available processors on a system, so etcd uses all processors concurrently. The configuration file must be updated to include the -advertise-client-urls flag when setting the -listen-client-urls flag (see the example invocation below). Red Hat Enterprise Linux Atomic Host 7.1.2 and Red Hat Enterprise Linux 7.1 include the following updates: kubernetes-0.15.0-0.3.git0ea87e4.el7 Enabled the v1beta3 API and set it as the default API version. Added multi-port services. The Kubelet now listens on a secure HTTPS port. The API server now supports client certificate authentication. Enabled log collection from the master pod. New volume support: iSCSI volume plug-in, GlusterFS volume plug-in, and Amazon Elastic Block Store (Amazon EBS) volume support. Fixed the NFS volume plug-in. The scheduler can now be configured using JSON. Improved messages on scheduler failure. Improved messages on port conflicts. Improved responsiveness of the master when creating new pods. Added support for inter-process communication (IPC) namespaces. The --etcd_config_file and --etcd_servers options have been removed from the kube-proxy utility; use the --master option instead. etcd-2.0.9-2.el7 The configuration file format has changed significantly; using old configuration files will cause upgrades of etcd to fail. The etcdctl command now supports importing hidden keys from the given snapshot. Added support for IPv6. The etcd proxy no longer fails to restart after initial configuration. The -initial-cluster flag is no longer required when bootstrapping a single-member cluster with the -name flag set. etcd 2 now uses its own implementation of the Raft distributed consensus protocol; previous versions of etcd used the goraft implementation. Added the etcdctl import command to import the migration snapshot generated in etcd 0.4.8 into an etcd 2.0 cluster. The etcdctl utility now uses port 2379 as its default port. The cadvisor package has been obsoleted by the kubernetes package. The functionality of cadvisor is now part of the kubelet subpackage. Red Hat Enterprise Linux 7.1 includes support for orchestrating Linux containers built using docker technology via kubernetes , flannel , and etcd .
Red Hat Enterprise Linux Atomic Host 7.1.1 and Red Hat Enterprise Linux 7.1 include the following updates: etcd 0.4.6-0.13.el7 - a new command, etcdctl , was added to make browsing and editing etcd easier for a system administrator. flannel 0.2.0-7.el7 - a bug fix to support delaying startup until after network interfaces are up. For more information see Get Started Orchestrating Containers with Kubernetes .
null
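The etcd flag requirement called out above can be illustrated with a minimal invocation. This is a sketch for etcd 2.0, with placeholder addresses (192.0.2.10 stands in for the member's routable IP):

etcd -name node1 -listen-client-urls http://0.0.0.0:2379 -advertise-client-urls http://192.0.2.10:2379

etcd 2.0 treats -listen-client-urls without a matching -advertise-client-urls as a configuration error, which is why the release note requires updating existing configuration files.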
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/sect-red_hat_enterprise_linux-7.1_release_notes-linux_containers_with_docker_format-container_orchestration
Chapter 35. Associating secondary interfaces metrics to network attachments
Chapter 35. Associating secondary interfaces metrics to network attachments 35.1. Extending secondary network metrics for monitoring Secondary devices, or interfaces, are used for different purposes. It is important to have a way to classify them to be able to aggregate the metrics for secondary devices with the same classification. Exposed metrics contain the interface name but do not specify where the interface originates. This is workable when there are no additional interfaces. However, if secondary interfaces are added, it can be difficult to use the metrics, since it is hard to identify interfaces using interface names alone. When adding secondary interfaces, their names depend on the order in which they are added, and different secondary interfaces might belong to different networks and be used for different purposes. With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms for specific interface types. The network type is generated using the name of the related NetworkAttachmentDefinition , which in turn is used to differentiate classes of secondary networks. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names. 35.1.1. Network Metrics Daemon The Network Metrics Daemon is a daemon component that collects and publishes network-related metrics. The kubelet already publishes network-related metrics you can observe. These metrics are: container_network_receive_bytes_total container_network_receive_errors_total container_network_receive_packets_total container_network_receive_packets_dropped_total container_network_transmit_bytes_total container_network_transmit_errors_total container_network_transmit_packets_total container_network_transmit_packets_dropped_total The labels in these metrics contain, among others: Pod name Pod namespace Interface name (such as eth0 ) These metrics work well until new interfaces are added to the pod, for example via Multus , because it is not clear what the interface names refer to. The interface label refers to the interface name, but it is not clear what that interface is meant for. With many different interfaces, it would be impossible to understand which network the metrics you are monitoring refer to. This is addressed by introducing the new pod_network_name_info metric described in the following section. 35.1.2. Metrics with network name This daemon set publishes a pod_network_name_info gauge metric, with a fixed value of 0 : pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0 The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition. The new metric alone does not provide much value, but combined with the network-related container_network_* metrics, it offers better support for monitoring secondary networks.
Using a PromQL query like the following ones, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/network-status annotation:
(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
[ "pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0", "(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/associating-secondary-interfaces-metrics-to-network-attachments
Chapter 1. Planning for securing applications and services
Chapter 1. Planning for securing applications and services As an OAuth2, OpenID Connect, and SAML compliant server, Red Hat build of Keycloak can secure any application and service as long as the technology stack they are using supports any of these protocols. For more details about the security protocols supported by Red Hat build of Keycloak, see the Server Administration Guide . Support for these protocols is often already available from the programming language, framework, or reverse proxy you are using. Leveraging the support already available from the application ecosystem is key to making your application fully compliant with security standards and best practices, and to avoiding vendor lock-in. For some programming languages, Red Hat build of Keycloak provides libraries that fill the gap when a particular security protocol is not supported, or that provide a richer and more tightly coupled integration with the server. These libraries are known as Keycloak Client Adapters , and they should be used as a last resort if you cannot rely on what is available from the application ecosystem. 1.1. Basic steps to secure applications and services These are the basic steps for securing an application or a service in Red Hat build of Keycloak. Register a client to a realm using one of these options: The Red Hat build of Keycloak Admin Console The client registration service The CLI Enable OpenID Connect or SAML protocols in your application using one of these options: Leveraging existing OpenID Connect and SAML support from the application ecosystem Using a Red Hat build of Keycloak Adapter This guide provides detailed instructions for these steps. You can find more details in the Server Administration Guide about how to register a client to Red Hat build of Keycloak through the administration console. 1.2. Getting Started The Red Hat build of Keycloak Quickstarts Repository provides examples of how to secure applications and services using different programming languages and frameworks. By going through their documentation and codebase, you will understand the bare minimum changes required in your application and service in order to secure it with Red Hat build of Keycloak. Also, see the following sections for recommendations for trusted and well-known client-side implementations for both OpenID Connect and SAML protocols. 1.2.1. OpenID Connect 1.2.1.1. JavaScript (client-side) JavaScript 1.2.1.2. Node.js (server-side) Node.js 1.2.2. SAML 1.2.2.1. Java JBoss EAP 1.3. Terminology These terms are used in this guide: Clients are entities that interact with Red Hat build of Keycloak to authenticate users and obtain tokens. Most often, clients are applications and services acting on behalf of users that provide a single sign-on experience to their users and access other services using the tokens issued by the server. Clients can also be entities only interested in obtaining tokens and acting on their own behalf for accessing other services. Applications include a wide range of applications that work for specific platforms for each protocol. Client adapters are libraries that make it easy to secure applications and services with Red Hat build of Keycloak. They provide a tight integration to the underlying platform and framework. Creating a client and registering a client are the same action. Creating a Client is the term used to create a client by using the Admin Console.
Registering a client is the term used to register a client by using the Red Hat build of Keycloak Client Registration Service. A service account is a type of client that is able to obtain tokens on its own behalf.
null
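For the CLI registration option listed above, Red Hat build of Keycloak ships the kcadm.sh admin client in the server's bin directory. A minimal sketch, assuming a local server, a realm named myrealm, and a client ID of myapp (the server address, realm, and client names are placeholders):

bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
bin/kcadm.sh create clients -r myrealm -s clientId=myapp -s protocol=openid-connect -s enabled=true

The first command authenticates the CLI against the master realm; the second registers the new OpenID Connect client in the target realm.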
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/securing_applications_and_services_guide/planning_for_securing_applications_and_services
9.7. Using Rules to Determine Resource Location
9.7. Using Rules to Determine Resource Location You can use a rule to determine a resource's location with the following command. The expression can be one of the following:
defined|not_defined attribute
attribute lt|gt|lte|gte|eq|ne value
date [start=start] [end=end] operation=gt|lt|in-range
date-spec date_spec_options
[ "pcs constraint location resource_id rule [rule_id] [role=master|slave] [score= score expression ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/_using_rules_to_determine_resource_location
Chapter 2. Service Mesh 1.x
Chapter 2. Service Mesh 1.x 2.1. Service Mesh Release Notes Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . 2.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 2.1.2. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. 2.1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh. For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 2.1.3.1. 
About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: $ oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... 2.1.3.2. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. 2.1.3.3. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather , replace <namespace> with your Service Mesh control plane namespace, such as istio-system . $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5 gather <namespace> This creates a local directory that contains the following items: The Istio Operator namespace and its child objects All control plane namespaces and their child objects All namespaces and their child objects that belong to any service mesh All Istio custom resource definitions (CRD) All Istio CRD objects, such as VirtualServices, in a given namespace All Istio webhooks 2.1.4. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.
This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 2.1.4.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 2.1.4.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 2.1.5. New Features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 2.1.5.1. New features Red Hat OpenShift Service Mesh 1.1.18.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.1.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.2
Component Version
Istio 1.4.10
Jaeger 1.30.2
Kiali 1.12.21.1
3scale Istio Adapter 1.0.0
2.1.5.2. New features Red Hat OpenShift Service Mesh 1.1.18.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.2.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.1
Component Version
Istio 1.4.10
Jaeger 1.30.2
Kiali 1.12.20.1
3scale Istio Adapter 1.0.0
2.1.5.3. New features Red Hat OpenShift Service Mesh 1.1.18 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.3.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18
Component Version
Istio 1.4.10
Jaeger 1.24.0
Kiali 1.12.18
3scale Istio Adapter 1.0.0
2.1.5.4. New features Red Hat OpenShift Service Mesh 1.1.17.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.4.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section at the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. For instance, an Istio authorization policy denies requests sent to the URI path /user/profile .
In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. 2.1.5.4.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo" and "httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com","httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        hosts: ["httpbin.com","httpbin.com:*"]
Second example AuthorizationPolicy using prefix match
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: default
spec:
  action: DENY
  rules:
  - to:
    - operation:
        hosts: ["httpbin.example.com:*"]
This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields. Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 2.1.5.8.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests:
Table 2.1. Normalization schemes
NONE : No normalization is done. Anything received by Envoy is forwarded exactly as-is to any backend service. Example: ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920.
BASE : This is currently the option used in the default installation of Istio. It applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. Examples: /a/../b is normalized to /b ; \da is normalized to /da . This setting is vulnerable to CVE-2021-31920.
MERGE_SLASHES : Slashes are merged after the BASE normalization. Example: /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920.
DECODE_AND_MERGE_SLASHES : The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policy routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ before the MERGE_SLASHES normalization. Example: /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production.
The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way they choose. When using denial policies, ensure that you understand how your application behaves. 2.1.5.8.3. Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system.
The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application.
Table 2.2. Configuration examples
If your application relies on the proxy to do normalization, choose BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES .
If your application normalizes request paths based on RFC 3986 and does not merge slashes, choose BASE .
If your application normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes, choose MERGE_SLASHES .
If your application normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes, choose DECODE_AND_MERGE_SLASHES .
If your application processes request paths in a way that is incompatible with RFC 3986 , choose NONE .
2.1.5.8.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v1 pathNormalization
spec:
  global:
    pathNormalization: <option>
2.1.5.9. New features Red Hat OpenShift Service Mesh 1.1.13 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.10. New features Red Hat OpenShift Service Mesh 1.1.12 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.11. New features Red Hat OpenShift Service Mesh 1.1.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.12. New features Red Hat OpenShift Service Mesh 1.1.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.13. New features Red Hat OpenShift Service Mesh 1.1.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.14. New features Red Hat OpenShift Service Mesh 1.1.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.15. New features Red Hat OpenShift Service Mesh 1.1.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.16. New features Red Hat OpenShift Service Mesh 1.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.17. New features Red Hat OpenShift Service Mesh 1.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also added support for configuring cipher suites. 2.1.5.18. New features Red Hat OpenShift Service Mesh 1.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Note There are manual steps that must be completed to address CVE-2020-8663. 2.1.5.18.1. Manual updates required by CVE-2020-8663 The fix for CVE-2020-8663 : envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. The configuration option for this limit must be configured to mitigate this vulnerability. Important These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh. This new configuration option is called overload.global_downstream_max_connections , and it is configurable as a proxy runtime setting.
Perform the following steps to configure limits at the Ingress Gateway. Procedure Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk: Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP): $ oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json Update the SMCP configuration to activate the override. Updated SMCP configuration example #1
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json
        secretVolumes:
        - mountPath: /var/lib/istio/envoy/custom-bootstrap
          name: custom-bootstrap
          secretName: gateway-bootstrap
To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000 : $ oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000 Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration: Updated SMCP configuration example #2
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  template: default
  #Change the version to "v1.0" if you are on the 1.0 stream.
  version: v1.1
  istio:
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json
        secretVolumes:
        - mountPath: /var/lib/istio/envoy/custom-bootstrap
          name: custom-bootstrap
          secretName: gateway-bootstrap
        # below is the new secret mount
        - mountPath: /var/lib/istio/envoy/runtime
          name: gateway-settings
          secretName: gateway-settings
2.1.5.18.2. Upgrading from Elasticsearch 5 to Elasticsearch 6 When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance, then recreate the Jaeger instance because of an issue with certificates. Re-creating the Jaeger instance triggers the creation of a new set of certificates. If you are using persistent storage, the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new Jaeger instance are the same as the deleted Jaeger instance. Procedure if Jaeger is installed as part of Red Hat Service Mesh Determine the name of your Jaeger custom resource file: $ oc get jaeger -n istio-system You should see something like the following: NAME AGE jaeger 3d21h Copy the generated custom resource file into a temporary directory: $ oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml Delete the Jaeger instance: $ oc delete jaeger jaeger -n istio-system Recreate the Jaeger instance from your copy of the custom resource file: $ oc create -f /tmp/jaeger-cr.yaml -n istio-system Delete the copy of the generated custom resource file: $ rm /tmp/jaeger-cr.yaml Procedure if Jaeger is not installed as part of Red Hat Service Mesh Before you begin, create a copy of your Jaeger custom resource file. Delete the Jaeger instance by deleting the custom resource file: $ oc delete -f <jaeger-cr-file> For example: $ oc delete -f jaeger-prod-elasticsearch.yaml Recreate your Jaeger instance from the backup copy of your custom resource file: $ oc create -f <jaeger-cr-file> Validate that your Pods have restarted: $ oc get pods -n jaeger-system -w 2.1.5.19.
New features Red Hat OpenShift Service Mesh 1.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.20. New features Red Hat OpenShift Service Mesh 1.1.2 This release of Red Hat OpenShift Service Mesh addresses a security vulnerability. 2.1.5.21. New features Red Hat OpenShift Service Mesh 1.1.1 This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation. 2.1.5.22. New features Red Hat OpenShift Service Mesh 1.1.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1. 2.1.5.22.1. Manual updates from 1.0 to 1.1 If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version. In the web console, click the Red Hat OpenShift Service Mesh Operator. Click the Project menu and choose the project where your ServiceMeshControlPlane is deployed from the list, for example istio-system . Click the name of your control plane, for example basic-install . Click YAML and add a version field to the spec: of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1 . The version field specifies the version of Service Mesh to install and defaults to the latest available version. Note Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October 2020. You must upgrade to either v1.1 or v2.0. 2.1.6. Deprecated features Some features available in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 2.1.6.1. Deprecated features Red Hat OpenShift Service Mesh 1.1.5 The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12: Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. v1alpha1 RBAC API - The v1alpha1 RBAC policy is deprecated by the v1beta1 AuthorizationPolicy . RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects. ServiceRole ServiceRoleBinding RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior. ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0) ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later) In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. The following components are also deprecated in this release and will be replaced by the Istiod component in a future release. Mixer - access control and usage policies Pilot - service discovery and proxy configuration Citadel - certificate generation Galley - configuration validation and distribution 2.1.7. Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not support IPv6 , as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform.
Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services such as Jaeger and Grafana from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. 2.1.7.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: Jaeger/Kiali Operator upgrade blocked with operator pending When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending. Workaround: See the linked Knowledge Base article for more information. Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected: [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control plane, delete the evicted istio-operator pod. MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues. MAISTRA-465 The Maistra Operator fails to create a service for operator metrics. MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of label before the pods are created, therefore the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 2.1.7.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali . These are the known issues in Kiali: KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser. 2.1.8. Fixed issues The following issues have been resolved in the current release: 2.1.8.1.
Service Mesh fixed issues MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the go routine. OSSM-542 Galley is not using the new certificate after rotation. OSSM-99 Workloads generated from direct pod without labels may crash Kiali. OSSM-93 IstioConfigList can't filter by two or more names. OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes. OSSM-90 Traces not available on the service details page. MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars. MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code. MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used. MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy . MAISTRA-932 Added the requires metadata to add dependency relationship between Jaeger Operator and OpenShift Elasticsearch Operator. Ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available. MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations. MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations. MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview. MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive. MAISTRA-572 Jaeger cannot be used with Kiali. In this release Jaeger is configured to use the OAuth proxy, but is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591 . MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails. MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround. MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel. Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. 
UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 2.1.8.2. Kiali fixed issues KIALI-3239 If a Kiali Operator pod has failed with a status of "Evicted" it blocks the Kiali operator from deploying. The workaround is to delete the Evicted pod and redeploy the Kiali operator. KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting. KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors. KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts. KIALI-2686 When the control plane has many namespaces, it can lead to performance issues. 2.2. Understanding Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 2.2.1. What is Red Hat OpenShift Service Mesh? A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features. Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 2.2.2. Red Hat OpenShift Service Mesh Architecture Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane: The data plane is a set of intelligent proxies deployed as sidecars. 
These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub. Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services. Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers). Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel. Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform. Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster. 2.2.3. Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 2.2.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 2.2.3.2. Kiali architecture Kiali is based on the open source Kiali project . Kiali is composed of two components: the Kiali application and the Kiali console. 
Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data and presents it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform (Jaeger) as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that the user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that the user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 2.2.3.3. Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js, Quarkus, Spring Boot, Thorntail, and Vert.x. You can also create your own custom dashboards.
Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update, and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 2.2.4. Understanding Jaeger Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together, usually executed in different processes or hosts, to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service-oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. Jaeger records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace consists of one or more spans. A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships.
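As an illustration only, and not a Jaeger API object, the data carried by a short trace of two spans might be sketched as follows; the operation names, identifiers, and timings are hypothetical:

trace:
  traceID: "abc123"                        # hypothetical trace identifier
  spans:
  - operationName: "productpage.GET"       # hypothetical operation name
    startTime: "2024-01-01T12:00:00.000Z"  # start time of the operation
    durationMs: 120                        # duration of the unit of work
    childSpans:
    - operationName: "reviews.GET"         # nested span modeling a causal child call
      startTime: "2024-01-01T12:00:00.020Z"
      durationMs: 80

The nesting mirrors the causal ordering described above: the child span is a unit of work performed while the parent span was in progress.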
2.2.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 2.2.4.2. Distributed tracing architecture The distributed tracing platform (Jaeger) is based on the open source Jaeger project . The distributed tracing platform (Jaeger) is made up of several components that work together to collect, store, and display tracing data. Jaeger Client (Tracer, Reporter, instrumented application, client libraries) - Jaeger clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Jaeger Agent (Server Queue, Processor Workers) - The Jaeger agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments like Kubernetes. Jaeger Collector (Queue, Workers) - Similar to the Agent, the Collector is able to receive spans and place them in an internal queue for processing. This allows the collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Jaeger has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Jaeger can use Apache Kafka as a buffer between the collector and the actual backing storage (Elasticsearch). Ingester is a service that reads data from Kafka and writes to another storage backend (Elasticsearch). Jaeger Console - Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 2.2.4.3. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.2.5. Next steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 2.3. Service Mesh and Istio differences Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways: 2.3.1. Multitenant installations Whereas upstream Istio takes a single-tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances. 2.3.1.1. Multitenancy versus cluster-wide installations The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by istiod. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding . Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details. If the OpenShift Container Platform cluster is configured to use the SDN plugin: NetworkPolicy : Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through (see the sketch after this list). Multitenant : Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project ). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project ). Subnet : No additional configuration is performed.
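For the NetworkPolicy point noted above, a minimal sketch of a policy that admits traffic from a non-member project might look like the following. The project names and the namespace label are hypothetical, and this is not the resource that the Operator generates:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-legacy     # hypothetical name
  namespace: bookinfo         # a hypothetical member project
spec:
  podSelector: {}             # select all pods in the member project
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: legacy        # a label you apply to the hypothetical non-member project

The non-member project must carry the label that the namespaceSelector matches; apply one to the project if it does not already exist.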
2.3.1.2. Cluster-scoped resources Upstream Istio relies on two cluster-scoped resources: MeshPolicy and ClusterRbacConfig . These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role-based access control. This must be created in the same project as the control plane. 2.3.2. Differences between Istio and Red Hat OpenShift Service Mesh An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. 2.3.2.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc. Red Hat OpenShift Service Mesh does not support istioctl. 2.3.2.2. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods, but requires you to opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods. To enable automatic injection, you specify the sidecar.istio.io/inject annotation as described in the Automatic sidecar injection section.
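As a sketch of the opt-in described above, the annotation is set on the pod template of a Deployment; the application name and image here are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings                             # hypothetical application
spec:
  selector:
    matchLabels:
      app: ratings
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"     # opt this workload in to sidecar injection
      labels:
        app: ratings
    spec:
      containers:
      - name: ratings
        image: example.com/ratings:v1       # hypothetical image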
2.3.2.3. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression. Upstream Istio community matching request headers example

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.headers[<header>]: "value"

Red Hat OpenShift Service Mesh matching request headers by using regular expressions

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.regex.headers[<header>]: "<regular expression>"

2.3.2.4. OpenSSL Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system. 2.3.2.5. Component modifications A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, Tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates. The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole. 2.3.2.6. Envoy, Secret Discovery Service, and certificates Red Hat OpenShift Service Mesh does not support QUIC-based services. Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in Red Hat OpenShift Service Mesh. The Istio implementation depends on a nodeagent container that uses hostPath mounts. 2.3.2.7. Istio Container Network Interface (CNI) plugin Red Hat OpenShift Service Mesh includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration, eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges. 2.3.2.8. Routes for Istio Gateways OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated, or deleted inside the service mesh, an OpenShift route is created, updated, or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation. 2.3.2.8.1. Catch-all domains Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname.
This means that the newly created route will not be a catch-all ("*") route; instead, it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize them. 2.3.2.8.2. Subdomains Subdomains (for example, "*.domain.com") are supported. However, this capability is not enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it. 2.3.2.8.3. Transport layer security Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation 2.3.3. Kiali and service mesh Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges. 2.3.4. Distributed tracing and service mesh Installing the distributed tracing platform (Jaeger) with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name of the Zipkin port has changed to jaeger-collector-zipkin (from http ). Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector. 2.4. Preparing to install Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh .
For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Before you can install Red Hat OpenShift Service Mesh, review the installation activities and ensure that you meet the following prerequisites. 2.4.1. Prerequisites Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.12 overview . Install OpenShift Container Platform 4.12. Install OpenShift Container Platform 4.12 on AWS Install OpenShift Container Platform 4.12 on user-provisioned AWS Install OpenShift Container Platform 4.12 on bare metal Install OpenShift Container Platform 4.12 on vSphere Note If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.12, see About the OpenShift CLI . 2.4.2. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 2.4.2.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 2.4.2.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 2.4.3. Service Mesh Operators overview Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator, which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Warning Do not install Community versions of the Operators. Community Operators are not supported. The following Operator is required: Red Hat OpenShift Service Mesh Operator Allows you to connect, secure, control, and observe the microservices that comprise your applications. It also defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. The following Operators are optional: Kiali Operator provided by Red Hat Provides observability for your service mesh.
You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift distributed tracing platform (Tempo) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Grafana Tempo project. The following optional Operators are deprecated: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. OpenShift Elasticsearch Operator Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project. Warning See Configuring the Elasticsearch log store for details on configuring the default Jaeger parameters for Elasticsearch in a production environment. 2.4.4. Next steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 2.5. Installing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and Service Mesh Operators, creating and managing a ServiceMeshControlPlane resource to deploy the control plane, and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh. Note Mixer's policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement. Note Multi-tenant control plane installations are the default configuration. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project. 2.5.1. Prerequisites Follow the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources related to the deployment, update, and deletion of the control plane. Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane. 2.5.2.
Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform (Jaeger) deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing platform, giving demonstrations, or using Red Hat OpenShift distributed tracing platform (Jaeger) in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform (Jaeger) in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators → OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default). This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing platform Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait for the InstallSucceeded status of the OpenShift Elasticsearch Operator before continuing. 2.5.3. Installing the Red Hat OpenShift distributed tracing platform Operator You can install the Red Hat OpenShift distributed tracing platform Operator through the OperatorHub . By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role.
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators → OperatorHub . Search for the Red Hat OpenShift distributed tracing platform Operator by entering distributed tracing platform in the search field. Select the Red Hat OpenShift distributed tracing platform Operator, which is provided by Red Hat, to display information about the Operator. Click Install . For the Update channel on the Install Operator page, select stable to automatically update the Operator when new versions are released. Accept the default All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. Note If you accept this default, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of this Operator when a new version of the Operator becomes available. If you select Manual updates, the OLM creates an update request when a new version of the Operator becomes available. To update the Operator to the new version, you must then manually approve the update request as a cluster administrator. The Manual approval strategy requires a cluster administrator to manually approve Operator installation and subscription. Click Install . Navigate to Operators → Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait for the Succeeded status of the Red Hat OpenShift distributed tracing platform Operator before continuing. 2.5.4. Installing the Kiali Operator You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane. Warning Do not install Community versions of the Operators. Community Operators are not supported. Prerequisites Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators → OperatorHub . Type Kiali into the filter box to find the Kiali Operator. Click the Kiali Operator provided by Red Hat to display information about the Operator. Click Install . On the Operator Installation page, select the stable Update Channel. Select All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Select the Automatic Approval Strategy. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . The Installed Operators page displays the Kiali Operator's installation progress. 2.5.5. Installing the Operators To install Red Hat OpenShift Service Mesh, you must install the Red Hat OpenShift Service Mesh Operator. Repeat the procedure for each additional Operator you want to install.
Additional Operators include: Kiali Operator provided by Red Hat Tempo Operator Deprecated additional Operators include: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Operator Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. In the OpenShift Container Platform web console, click Operators → OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported. Click Install . On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator installs before repeating the steps for the next Operator you want to install. The Red Hat OpenShift Service Mesh Operator installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Kiali Operator provided by Red Hat installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Tempo Operator installs in the openshift-tempo-operator namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform (Jaeger) installs in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. The OpenShift Elasticsearch Operator installs in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, OpenShift Elasticsearch Operator is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. Verification After you have installed all four Operators, click Operators → Installed Operators to verify that your Operators are installed. 2.5.6. Deploying the Red Hat OpenShift Service Mesh control plane The ServiceMeshControlPlane resource defines the configuration to be used during installation.
You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs. You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool. 2.5.6.1. Deploying the control plane from the web console Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a project named istio-system . Navigate to Home → Projects . Click Create Project . Enter istio-system in the Name field. Click Create . Navigate to Operators → Installed Operators . If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs , the Operator provides links to create two resource types: A ServiceMeshControlPlane resource A ServiceMeshMemberRoll resource Under Istio Service Mesh Control Plane , click Create ServiceMeshControlPlane . On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed. Note For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. Click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured. 2.5.6.2. Deploying the control plane from the CLI Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.

$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443

Create a project named istio-system .

$ oc new-project istio-system

Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation" (a minimal sketch also follows this procedure). You can customize the values as needed to match your use case. For production deployments, you must change the default Jaeger template. Run the following command to deploy the control plane:

$ oc create -n istio-system -f istio-installation.yaml

Execute the following command to see the status of the control plane installation.

$ oc get smcp -n istio-system

The installation has finished successfully when the STATUS column is ComponentsReady . Run the following command to watch the progress of the pods during the installation process:

$ oc get pods -n istio-system -w

You should see output similar to the following: Example output

NAME                                     READY   STATUS    RESTARTS   AGE
grafana-7bf5764d9d-2b2f6                 2/2     Running   0          28h
istio-citadel-576b9c5bbd-z84z4           1/1     Running   0          28h
istio-egressgateway-5476bc4656-r4zdv     1/1     Running   0          28h
istio-galley-7d57b47bb7-lqdxv            1/1     Running   0          28h
istio-ingressgateway-dbb8f7f46-ct6n5     1/1     Running   0          28h
istio-pilot-546bf69578-ccg5x             2/2     Running   0          28h
istio-policy-77fd498655-7pvjw            2/2     Running   0          28h
istio-sidecar-injector-df45bd899-ctxdt   1/1     Running   0          28h
istio-telemetry-66f697d6d5-cj28l         2/2     Running   0          28h
jaeger-896945cbc-7lqrr                   2/2     Running   0          11h
kiali-78d9c5b87c-snjzh                   1/1     Running   0          22h
prometheus-6dff867c97-gr2n5              2/2     Running   0          28h

For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates .
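The CLI procedure above references an istio-installation.yaml file. As a minimal sketch only, assuming the maistra.io/v1 schema used elsewhere in this document and illustrative values rather than a recommended production configuration, such a file might look like:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install             # hypothetical name
spec:
  istio:
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false   # illustrative setting
    tracing:
      enabled: true
      jaeger:
        template: all-in-one      # not suitable for production; change the Jaeger template

For production deployments, change the default Jaeger template as noted in the procedure and consult "Customize the Red Hat OpenShift Service Mesh installation" for the full set of fields.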
2.5.7. Creating the Red Hat OpenShift Service Mesh member roll The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system . 2.5.7.1. Creating the member roll from the web console You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home → Projects . Enter a name in the Name field. Click Create . Navigate to Operators → Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create . 2.5.7.2. Creating the member roll from the CLI You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI.

$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443

If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane.

$ oc new-project <your-project>

To add your projects as members, modify the following example YAML.
You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project. Example servicemeshmemberroll-default.yaml

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    # a list of projects joined into the service mesh
    - your-project-name
    - another-project-name

Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace.

$ oc create -n istio-system -f servicemeshmemberroll-default.yaml

Run the following command to verify the ServiceMeshMemberRoll was created successfully.

$ oc get smmr -n istio-system default

The installation has finished successfully when the STATUS column is Configured . 2.5.8. Adding or removing projects from the service mesh You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. 2.5.8.1. Adding or removing projects from the member roll using the web console Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators → Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Save . Click Reload . 2.5.8.2. Adding or removing projects from the member roll using the CLI You can modify an existing Service Mesh member roll using the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource.

$ oc edit smmr -n <controlplane-namespace>

Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system # control plane project
spec:
  members:
    # a list of projects joined into the service mesh
    - your-project-name
    - another-project-name

2.5.9. Manual updates If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators.
For more information about how OpenShift Container Platform handles upgrades, refer to the Operator Lifecycle Manager documentation. 2.5.9.1. Updating sidecar proxies To update the configuration for sidecar proxies, the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods:

$ oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}'

If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods. 2.5.10. Next steps Prepare to deploy applications on Red Hat OpenShift Service Mesh. 2.6. Customizing security in a Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh can help you manage the complexity of your applications and provide service and identity security for microservices. 2.6.1. Enabling mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol where two parties authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). mTLS can be used without changes to the application or service code. TLS is handled entirely by the service mesh infrastructure, between the two sidecar proxies. By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh. 2.6.1.1. Enabling strict mTLS across the mesh If your workloads do not communicate with services outside your mesh and communication will not be interrupted by only accepting encrypted connections, you can enable mTLS across your mesh quickly. Set spec.istio.global.mtls.enabled to true in your ServiceMeshControlPlane resource. The operator creates the required resources.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      mtls:
        enabled: true

2.6.1.1.1. Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services or namespaces by creating a policy.

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: default
  namespace: <NAMESPACE>
spec:
  peers:
  - mtls: {}
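The namespace-wide example above applies to every workload in <NAMESPACE>. As a sketch only, assuming the Bookinfo sample's ratings service in a hypothetical member project, the same policy type can be narrowed to a single service with a targets list:

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: ratings-mtls       # hypothetical name
  namespace: bookinfo      # hypothetical member project
spec:
  targets:
  - name: ratings          # restrict the policy to the ratings service
  peers:
  - mtls: {}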
apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: "*.local" trafficPolicy: tls: mode: ISTIO_MUTUAL 2.6.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3 The default is TLS_AUTO and does not specify a version of TLS. Table 2.3. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 2.6.2. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma separated list of cipher suites using spec.istio.global.tls.cipherSuites and ECDH curves using spec.istio.global.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes are empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites. The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 2.6.3. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates self-signed root certificate and key, and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates, with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. 
2.6.3. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates a self-signed root certificate and key, and uses them to sign the workload certificates. You can also use a user-defined certificate and key to sign workload certificates, with a user-defined root certificate. This task demonstrates an example of plugging certificates and a key into Service Mesh. Prerequisites You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. You must deploy the Bookinfo sample application to verify the results with these instructions. 2.6.3.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is called ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is called root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Add the certificates to Service Mesh by following these steps. Save the example certificates from the Maistra repo locally and replace <path> with the path to your certificates. Create a secret named cacerts that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem , and cert-chain.pem .

$ oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \
  --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \
  --from-file=<path>/cert-chain.pem

In the ServiceMeshControlPlane resource, set global.mtls.enabled to true and security.selfSigned to false . Service Mesh reads the certificates and key from the secret-mount files.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      mtls:
        enabled: true
    security:
      selfSigned: false

To make sure the workloads add the new certificates promptly, delete the secrets generated by Service Mesh, named istio.* . In this example, istio.default . Service Mesh issues new certificates for the workloads.

$ oc delete secret istio.default

2.6.3.2. Verifying your certificates Use the Bookinfo sample application to verify your certificates are mounted correctly. First, retrieve the mounted certificates. Then, verify the certificates mounted on the pod. Store the pod name in the variable RATINGSPOD .

$ RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`

Run the following commands to retrieve the certificates mounted on the proxy.

$ oc exec -it $RATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem

The file /tmp/pod-root-cert.pem contains the root certificate propagated to the pod.

$ oc exec -it $RATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem

The file /tmp/pod-cert-chain.pem contains the workload certificate and the CA certificate propagated to the pod. Verify the root certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates.

$ openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt
$ openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt
$ diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt

Expect the output to be empty. Verify the CA certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates.

$ sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem
$ openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt
$ openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt
$ diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt

Expect the output to be empty. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates.
$ head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem
$ openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem

Example output

/tmp/pod-cert-chain-workload.pem: OK

2.6.3.3. Removing the certificates

To remove the certificates you added, follow these steps.

Remove the secret cacerts .

$ oc delete secret cacerts -n istio-system

Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      mtls:
        enabled: true
      security:
        selfSigned: true

2.7. Traffic management

Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page .

You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services.

2.7.1. Using gateways

You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads.

Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh.

Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy.
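For illustration, an egress gateway definition looks much like an ingress gateway but selects the egress gateway deployment. The following is a minimal sketch only: the name and external host are illustrative, and a working egress setup also needs virtual services (and usually a service entry) that route traffic through this gateway, which are not shown here.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-svc-egress-gw   # illustrative name
spec:
  selector:
    istio: egressgateway    # matches the default egress gateway proxy
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - ext-svc.example.com   # illustrative external host
    tls:
      mode: PASSTHROUGH     # pass the TLS connection through without terminating it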
Gateway example

A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on.

The following example shows a sample gateway configuration for external HTTPS ingress traffic:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      serverCertificate: /tmp/tls.crt
      privateKey: /tmp/tls.key

This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-svc
spec:
  hosts:
  - ext-host.example.com
  gateways:
  - ext-host-gwy

You can then configure the virtual service with routing rules for the external traffic.

2.7.2. Configuring an ingress gateway

An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests.

The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login .

Procedure

Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it.

Gateway example gateway.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Apply the YAML file.

$ oc apply -f gateway.yaml

Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it.

Virtual service example vs.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080

Apply the YAML file.

$ oc apply -f vs.yaml

Test that the gateway and VirtualService have been set correctly.

Set the Gateway URL.

export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')

Set the port number. In this example, istio-system is the name of the Service Mesh control plane project.

export TARGET_PORT=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')

Test a page that has been explicitly exposed.

curl -s -I "$GATEWAY_URL/productpage"

The expected result is 200 .
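As a quick negative check (an illustrative sketch, not part of the official procedure), requesting a path that the VirtualService does not match should return a 404 from the gateway rather than being routed to a service. The path /unmatched-path below is a hypothetical example:

curl -s -o /dev/null -w "%{http_code}\n" "$GATEWAY_URL/unmatched-path"

The expected result is 404 .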
2.7.3. Managing ingress traffic

In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh.

2.7.3.1. Determining the ingress IP and ports

Ingress configuration differs depending on whether your environment supports an external load balancer. If one is present, the external load balancer supplies the ingress IP and ports for the cluster. To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project.

$ oc get svc istio-ingressgateway -n istio-system

That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace.

If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway.

2.7.3.1.1. Determining ingress ports with a load balancer

Follow these instructions if your environment has an external load balancer.

Procedure

Run the following command to set the ingress IP and ports. This command sets a variable in your terminal.

$ export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Run the following command to set the ingress port.

$ export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')

Run the following command to set the secure ingress port.

$ export SECURE_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')

Run the following command to set the TCP ingress port.

$ export TCP_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')

Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. In that case, the ingress gateway's EXTERNAL-IP value is not an IP address but a hostname, and the command above fails to set the INGRESS_HOST environment variable. Use the following command to correct the INGRESS_HOST value:

$ export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

2.7.3.1.2. Determining ingress ports without a load balancer

If your environment does not have an external load balancer, determine the ingress ports and use a node port instead.

Procedure

Set the ingress ports.

$ export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

Run the following command to set the secure ingress port.

$ export SECURE_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

Run the following command to set the TCP ingress port.

$ export TCP_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')
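Whichever method applies, the host and port can then be combined into a single gateway address. The following sketch follows upstream Istio conventions and is not part of the official procedure; in the node-port case, INGRESS_HOST is an assumption you must first set yourself to the address of a cluster node:

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl -s -I "http://$GATEWAY_URL/productpage"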
2.7.4. Automatic route creation

OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted.

2.7.4.1. Enabling Automatic Route Creation

A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. Enable IOR as part of the control plane deployment. If the Gateway contains a TLS section, the OpenShift Route will be configured to support TLS.

In the ServiceMeshControlPlane resource, add the ior_enabled parameter and set it to true . For example, see the following resource snippet:

spec:
  istio:
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
        autoscaleMin: 1
        autoscaleMax: 5
      istio-ingressgateway:
        autoscaleEnabled: false
        autoscaleMin: 1
        autoscaleMax: 5
        ior_enabled: true

2.7.4.2. Subdomains

Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host Gateway. For more information, see the "Additional resources" section.

If the following gateway is created:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - www.bookinfo.com
    - bookinfo.example.com

Then, the following OpenShift Routes are created automatically. You can check that the routes are created with the following command.

$ oc -n <control_plane_namespace> get routes

Expected output

NAME             HOST/PORT              PATH   SERVICES               PORT    TERMINATION   WILDCARD
gateway1-lvlfn   bookinfo.example.com          istio-ingressgateway   <all>                 None
gateway1-scqhv   www.bookinfo.com              istio-ingressgateway   <all>                 None

If the gateway is deleted, Red Hat OpenShift Service Mesh deletes the routes. However, routes created manually are never modified by Red Hat OpenShift Service Mesh.

2.7.5. Understanding service entries

A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following:

Manage traffic for services that run outside of the service mesh.
Redirect and forward traffic for external destinations (such as APIs consumed from the web) or traffic to services in legacy infrastructure.
Define retry, timeout, and fault injection policies for external destinations.
Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh.

Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes.

Service entry examples

The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-entry
spec:
  hosts:
  - ext-svc.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS

Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name.

You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ext-res-dr
spec:
  host: ext-svc.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
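Continuing the same example, retry, timeout, and fault injection policies for the external destination are set with a virtual service, exactly as for in-mesh services. The following is a minimal sketch: the resource name and the 3-second timeout are illustrative values, not part of the examples above.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ext-svc-timeout   # illustrative name
spec:
  hosts:
  - ext-svc.example.com
  http:
  - timeout: 3s           # fail calls to the external service after 3 seconds
    route:
    - destination:
        host: ext-svc.example.com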
2.7.6. Using VirtualServices

You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can:

Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience.
Configure traffic rules in combination with gateways to control ingress and egress traffic.

2.7.6.1. Configuring VirtualServices

Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh maps each request sent to the virtual service to a specific, real destination within the mesh.

Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services.

Procedure

Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application.

Example VirtualService.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3

Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file.

$ oc apply -f <VirtualService.yaml>

2.7.6.2. VirtualService configuration reference

Parameter    Description
hosts    The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name.
http    The http section contains the virtual service's routing rules, which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests.
destination    The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is the Kubernetes service name reviews .

2.7.7.
Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 This guide references the Bookinfo sample application to provide examples of routing in an example application. Install the Bookinfo application to learn how these routing examples work. 2.7.8. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo app /product page in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites Deploy the Bookinfo sample application to work with the following examples. 2.7.8.1. Applying a virtual service In the following procedure, the virtual service routes all traffic to v1 of each micro-service by applying virtual services that set the default version for the micro-services. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route to the v1 version of the Bookinfo microservices including the reviews service version 1. 2.7.8.2. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. 
export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser. The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 2.7.8.3. Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 2.7.9. Additional resources For more information about configuring an OpenShift Container Platform wildcard policy, see Using wildcard routes . 2.8. Deploying applications on Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . When you deploy an application into the Service Mesh, there are several differences between the behavior of applications in the upstream community version of Istio and the behavior of applications within a Red Hat OpenShift Service Mesh installation. 2.8.1. Prerequisites Review Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations Review Installing Red Hat OpenShift Service Mesh 2.8.2. Creating control plane templates You can create reusable configurations with ServiceMeshControlPlane templates. Individual users can extend the templates they create with their own configurations. Templates can also inherit configuration information from other templates. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production templates with team specific customization. 
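A control plane template is an ordinary ServiceMeshControlPlane-style YAML document stored as a file in the templates directory. The following minimal sketch shows what a development-oriented team template might contain; the values here are illustrative assumptions, not settings from this documentation:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    pilot:
      autoscaleEnabled: false
      traceSampling: 100   # sample every trace while developing
    kiali:
      enabled: true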
When you configure control plane templates, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default template with default settings for Red Hat OpenShift Service Mesh. To add custom templates you must create a ConfigMap named smcp-templates in the openshift-operators project and mount the ConfigMap in the Operator container at /usr/local/share/istio-operator/templates .

2.8.2.1. Creating the ConfigMap

Follow this procedure to create the ConfigMap.

Prerequisites
An installed, verified Service Mesh Operator.
An account with the cluster-admin role.
Location of the Operator deployment.
Access to the OpenShift CLI ( oc ).

Procedure

Log in to the OpenShift Container Platform CLI as a cluster administrator.

From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <templates-directory> with the location of the ServiceMeshControlPlane files on your local disk:

$ oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators

Locate the Operator ClusterServiceVersion name.

$ oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'

Example output

maistra.v1.0.0   Red Hat OpenShift Service Mesh   1.0.0   Succeeded

Edit the Operator cluster service version to instruct the Operator to use the smcp-templates ConfigMap.

$ oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0

Add a volume mount and volume to the Operator deployment.

deployments:
- name: istio-operator
  spec:
    template:
      spec:
        containers:
          volumeMounts:
          - name: discovery-cache
            mountPath: /home/istio-operator/.kube/cache/discovery
          - name: smcp-templates
            mountPath: /usr/local/share/istio-operator/templates/
        volumes:
        - name: discovery-cache
          emptyDir:
            medium: Memory
        - name: smcp-templates
          configMap:
            name: smcp-templates
...

Save your changes and exit the editor.

You can now use the template parameter in the ServiceMeshControlPlane to specify a template.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: minimal-install
spec:
  template: default
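Before creating control planes that reference a custom template, you can confirm that the templates ConfigMap exists where the Operator expects it. This check is an illustrative aside, not part of the official procedure:

$ oc get configmap smcp-templates -n openshift-operators -o name
configmap/smcp-templates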
2.8.3. Enabling automatic sidecar injection

When deploying an application, you must opt-in to injection by configuring the label sidecar.istio.io/inject in spec.template.metadata.labels to true in the deployment object. Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem.

Prerequisites
Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection.

Procedure

To find your deployments use the oc get command.

$ oc get deployment -n <namespace>

For example, to view the Deployment YAML file for the 'ratings-v1' microservice in the bookinfo namespace, use the following command to see the resource in YAML format.

oc get deployment -n bookinfo ratings-v1 -o yaml

Open the application's Deployment YAML file in an editor.

Add the sidecar.istio.io/inject label in spec.template.metadata.labels to your Deployment YAML file and set its value to 'true' as shown in the following example.

Example snippet from bookinfo deployment-ratings-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: 'true'

Note Using the annotations parameter when enabling automatic sidecar injection is deprecated and is replaced by using the labels parameter.

Save the Deployment YAML file.

Add the file back to the project that contains your app.

$ oc apply -n <namespace> -f deployment.yaml

In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited.

$ oc apply -n bookinfo -f deployment-ratings-v1.yaml

To verify that the resource uploaded successfully, run the following command.

$ oc get deployment -n <namespace> <deploymentName> -o yaml

For example,

$ oc get deployment -n bookinfo ratings-v1 -o yaml

2.8.4. Setting proxy environment variables through annotations

Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected to the sidecar.

Example injection-template.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource
spec:
  replicas: 7
  selector:
    matchLabels:
      app: resource
  template:
    metadata:
      annotations:
        sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }"

Warning You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations will be overwritten or deleted by the Operator during the reconciliation.

2.8.5. Updating Mixer policy enforcement

In previous versions of Red Hat OpenShift Service Mesh, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.

Prerequisites
Access to the OpenShift CLI ( oc ).

Note The examples use istio-system as the control plane namespace. Replace this value with the namespace where you deployed the Service Mesh Control Plane (SMCP).

Procedure

Log in to the OpenShift Container Platform CLI.

Run this command to check the current Mixer policy enforcement status:

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks

If disablePolicyChecks: true , edit the Service Mesh ConfigMap:

$ oc edit cm -n istio-system istio

Locate disablePolicyChecks: true within the ConfigMap and change the value to false .

Save the configuration and exit the editor.

Re-check the Mixer policy enforcement status to ensure it is set to false .

2.8.5.1. Setting the correct network policy

Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure that services in your service mesh that were previously exposed through an OpenShift Container Platform route continue to function properly.

Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly.
Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that need to be deployed within a service mesh enlisted namespace should label their deployments maistra.io/expose-route: "true" , which ensures OpenShift Container Platform routes to these services still work. 2.8.6. Bookinfo example application The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.5.2 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars. 2.8.6.1. Installing the Bookinfo application This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.5.2 installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Note The Bookinfo sample application cannot be installed on IBM Z and IBM Power. Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project. USD oc new-project bookinfo Click Operators Installed Operators . Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created a Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file. Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project. 
USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system -o wide The installation has finished successfully when the STATUS column is Configured . NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s ["bookinfo"] From the CLI, deploy the Bookinfo application in the `bookinfo` project by applying the bookinfo.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/platform/kube/bookinfo.yaml You should see output similar to the following: service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created Create the ingress gateway by applying the bookinfo-gateway.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/bookinfo-gateway.yaml You should see output similar to the following: gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created Set the value for the GATEWAY_URL parameter: USD export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') 2.8.6.2. Adding default destination rules Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all.yaml If you enabled mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all-mtls.yaml You should see output similar to the following: destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created 2.8.6.3. Verifying the Bookinfo installation To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure from CLI Verify that all pods are ready with this command: USD oc get pods -n bookinfo All pods should have a status of Running . 
You should see output similar to the following:

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-55b869668-jh7hb        2/2     Running   0          12m
productpage-v1-6fc77ff794-nsl8r   2/2     Running   0          12m
ratings-v1-7d7d8d8b56-55scn       2/2     Running   0          12m
reviews-v1-868597db96-bdxgq       2/2     Running   0          12m
reviews-v2-5b64f47978-cvssp       2/2     Running   0          12m
reviews-v3-6dfd49b55b-vcwpf       2/2     Running   0          12m

Run the following command to retrieve the URL for the product page:

echo "http://$GATEWAY_URL/productpage"

Copy and paste the output in a web browser to verify the Bookinfo product page is deployed.

Procedure from Kiali web console

Obtain the address for the Kiali web console.

Log in to the OpenShift Container Platform web console.

Navigate to Networking Routes .

On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route.

Click the link in the Location column for Kiali.

Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace.

In Kiali, click Graph .

Select bookinfo from the Namespace list, and App graph from the Graph Type list.

Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported.

Use the Duration menu to increase the time period to help ensure older traffic is captured. Use the Refresh Rate menu to refresh traffic more or less often, or not at all.

Click Services , Workloads or Istio Config to see list views of bookinfo components, and confirm that they are healthy.

2.8.6.4. Removing the Bookinfo application

Follow these steps to remove the Bookinfo application.

Prerequisites
OpenShift Container Platform 4.1 or higher installed.
Red Hat OpenShift Service Mesh 2.5.2 installed.
Access to the OpenShift CLI ( oc ).

2.8.6.4.1. Delete the Bookinfo project

Procedure

Log in to the OpenShift Container Platform web console.

Click Home Projects .

Click the bookinfo menu , and then click Delete Project .

Type bookinfo in the confirmation dialog box, and then click Delete .

Alternatively, you can run this command using the CLI to delete the bookinfo project.

$ oc delete project bookinfo

2.8.6.4.2. Remove the Bookinfo project from the Service Mesh member roll

Procedure

Log in to the OpenShift Container Platform web console.

Click Operators Installed Operators .

Click the Project menu and choose istio-system from the list.

Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator.

Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll .

Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list.

Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project.

$ oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]'

Click Save to update Service Mesh Member Roll.

2.8.7. Generating example traces and analyzing trace data

Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through various microservices which make up an application. Jaeger is installed by default as part of the Service Mesh.
This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.5.2 installed. Jaeger enabled during the installation. Bookinfo example application installed. Procedure After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. In the OpenShift Container Platform console, navigate to Networking Routes and search for the Jaeger route, which is the URL listed under Location . Alternatively, use the CLI to query for details of the route. In this example, istio-system is the Service Mesh control plane namespace: USD export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Enter the following command to reveal the URL for the Jaeger console. Paste the result in a browser and navigate to that URL. echo USDJAEGER_URL Log in using the same user name and password as you use to access the OpenShift Container Platform console. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click Find Traces at the bottom of the pane. A list of traces is displayed. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage . 2.9. Data visualization and observability Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can view your application's topology, health and metrics in the Kiali console. If your service is having issues, the Kiali console offers ways to visualize the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time. Before you begin You can observe the data flow through your application if you have an application installed. If you don't have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 2.9.1. Viewing service mesh data The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh. Procedure Use the perspective switcher to switch to the Administrator perspective. Click Home Projects . Click the name of your project. For example, click bookinfo . In the Launcher section, click Kiali . Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. 
When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation, there might not be any data to display.

2.9.2. Viewing service mesh data in the Kiali console

The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology.

There are several graphs to choose from:

The App graph shows an aggregate workload for all applications that are labeled the same.
The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high level view and aggregates all traffic for defined services.
The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together.
The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph.

Graph nodes are decorated with a variety of information, pointing out various traffic routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities.

Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph.

To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel.

2.9.2.1. Changing graph layouts in Kiali

The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts.

Prerequisites

If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times.

$ curl "http://$GATEWAY_URL/productpage"

This command simulates a user visiting the productpage microservice of the application.

Procedure

Launch the Kiali console.

Click Log In With OpenShift .

In Kiali console, click Graph to view a namespace graph.

From the Namespace menu, select your application namespace, for example, bookinfo .

To choose a different graph layout, do either or both of the following:

Select different graph data groupings from the menu at the top of the graph.
App graph
Service graph
Versioned App graph (default)
Workload graph

Select a different graph layout from the Legend at the bottom of the graph.
Layout default dagre
Layout 1 cose-bilkent
Layout 2 cola

2.10. Custom resources

Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported.
For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource. 2.10.1. Prerequisites An account with the cluster-admin role. Completed the Preparing to install Red Hat OpenShift Service Mesh process. Have installed the operators. 2.10.2. Red Hat OpenShift Service Mesh custom resources Note The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary. A custom resource allows you to extend the API in an Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh it creates a default ServiceMeshControlPlane that you can modify to change the project parameters. The Service Mesh operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object. This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.1.18.2 images based on Red Hat Enterprise Linux (RHEL). Important The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account ( SaaS or On-Premises ). Example istio-installation.yaml apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one 2.10.3. ServiceMeshControlPlane parameters The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters. Important The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift Container Platform cluster. Configure these parameters based on the available resources in your current cluster configuration. 2.10.3.1. Istio global example Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Note In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false . Example global parameters istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret Table 2.4. Global parameters Parameter Description Values Default value disablePolicyChecks This parameter enables/disables policy checks. 
true / false true policyCheckFailOpen This parameter indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached. true / false false tag The tag that the Operator uses to pull the Istio images. A valid container image tag. 1.1.0 hub The hub that the Operator uses to pull Istio images. A valid image repository. maistra/ or registry.redhat.io/openshift-service-mesh/ mtls This parameter controls whether to enable/disable Mutual Transport Layer Security (mTLS) between services by default. true / false false imagePullSecrets If access to the registry providing the Istio images is secure, list an imagePullSecret here. redhat-registry-pullsecret OR quay-pullsecret None These parameters are specific to the proxy subset of global parameters. Table 2.5. Proxy parameters Type Parameter Description Values Default value requests cpu The amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 10m memory The amount of memory requested for Envoy proxy Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 2000m memory The maximum amount of memory Envoy proxy is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 1024Mi 2.10.3.2. Istio gateway configuration Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example gateway parameters gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 Table 2.6. Istio Gateway parameters Parameter Description Values Default value gateways.egress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.egress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.egress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 gateways.ingress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.ingress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.ingress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Cluster administrators can refer to Using wildcard routes for instructions on how to enable subdomains. 2.10.3.3. 
Istio Mixer configuration Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example mixer parameters mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits: Table 2.7. Istio Mixer policy parameters Parameter Description Values Default value enabled This parameter enables/disables Mixer. true / false true autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true autoscaleMin The minimum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 autoscaleMax The maximum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Table 2.8. Istio Mixer telemetry parameters Type Parameter Description Values Default requests cpu The percentage of CPU resources requested for Mixer telemetry. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Mixer telemetry. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum percentage of CPU resources Mixer telemetry is permitted to use. CPU resources in millicores based on your environment's configuration. 4800m memory The maximum amount of memory Mixer telemetry is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 4G 2.10.3.4. Istio Pilot configuration You can configure Pilot to schedule or set limits on resource allocation. The following example illustrates the Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example pilot parameters spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M Table 2.9. Istio Pilot parameters Parameter Description Values Default value cpu The percentage of CPU resources requested for Pilot. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Pilot. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true traceSampling This value controls how often random sampling occurs. Note: Increase for development or testing. A valid percentage. 1.0 2.10.4. Configuring Kiali When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. The default Kiali parameters specified in the ServiceMeshControlPlane are as follows: Example Kiali parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true Table 2.10. 
2.10.4. Configuring Kiali

When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances.

The default Kiali parameters specified in the ServiceMeshControlPlane are as follows:

Example Kiali parameters

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
    ingress:
      enabled: true

Table 2.10. Kiali parameters
Parameter | Description | Values | Default value
enabled | This parameter enables/disables Kiali. Kiali is enabled by default. | true / false | true
dashboard.viewOnlyMode | This parameter enables/disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the console to make changes to the Service Mesh. | true / false | false
ingress.enabled | This parameter enables/disables ingress for Kiali. | true / false | true

2.10.4.1. Configuring Kiali for Grafana

When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh the Operator configures the following by default:

Grafana is enabled as an external service for Kiali
Grafana authorization for the Kiali console
Grafana URL for the Kiali console

Kiali can automatically detect the Grafana URL. However, if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource.

Additional Grafana parameters

spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
      grafanaURL: "https://grafana-istio-system.127.0.0.1.nip.io"
    ingress:
      enabled: true

2.10.4.2. Configuring Kiali for Jaeger

When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh the Operator configures the following by default:

Jaeger is enabled as an external service for Kiali
Jaeger authorization for the Kiali console
Jaeger URL for the Kiali console

Kiali can automatically detect the Jaeger URL. However, if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource.

Additional Jaeger parameters

spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
      jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io"
    ingress:
      enabled: true

2.10.5. Configuring Jaeger

When the Service Mesh Operator creates the ServiceMeshControlPlane resource it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing.

You can specify your Jaeger configuration in either of two ways:

Configure Jaeger in the ServiceMeshControlPlane resource. There are some limitations with this approach.
Configure Jaeger in a custom Jaeger resource and then reference that Jaeger instance in the ServiceMeshControlPlane resource. If a Jaeger resource matching the value of name exists, the control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration.

The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows:

Default all-in-one Jaeger parameters

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  version: v1.1
  istio:
    tracing:
      enabled: true
      jaeger:
        template: all-in-one

Table 2.11. Jaeger parameters
Parameter | Description | Values | Default value
tracing.enabled | This parameter enables/disables installing and deploying tracing by the Service Mesh Operator. Installing Jaeger is enabled by default. To use an existing Jaeger deployment, set this value to false. | true / false | true
jaeger.template | This parameter specifies which Jaeger deployment strategy to use. | all-in-one - For development, testing, demonstrations, and proof of concept. production-elasticsearch - For production use. | all-in-one

Note: The default template in the ServiceMeshControlPlane resource is the all-in-one deployment strategy, which uses in-memory storage. For production, the only supported storage option is Elasticsearch, therefore you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment.
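For example, to see the Jaeger resource that the Operator created, assuming the default control plane project istio-system:

$ oc get jaeger -n istio-system

Example output

NAME     AGE
jaeger   3d21h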
2.10.5.1. Configuring Elasticsearch

The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments.

If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs.

Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation.

Default "production" Jaeger parameters with Elasticsearch

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    tracing:
      enabled: true
      ingress:
        enabled: true
      jaeger:
        template: production-elasticsearch
        elasticsearch:
          nodeCount: 3
          redundancyPolicy:
          resources:
            requests:
              cpu: "1"
              memory: "16Gi"
            limits:
              cpu: "1"
              memory: "16Gi"

Table 2.12. Elasticsearch parameters
Parameter | Description | Values | Default value | Examples
tracing.enabled | This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. | true / false | true |
ingress.enabled | This parameter enables/disables ingress for Jaeger. | true / false | true |
jaeger.template | This parameter specifies which Jaeger deployment strategy to use. | all-in-one / production-elasticsearch | all-in-one |
elasticsearch.nodeCount | Number of Elasticsearch nodes to create. | Integer value. | 1 | Proof of concept = 1, Minimum deployment = 3
requests.cpu | Number of central processing units for requests, based on your environment's configuration. | Specified in cores or millicores (for example, 200m, 0.5, 1). | 500m | Proof of concept = 500m, Minimum deployment = 1
requests.memory | Available memory for requests, based on your environment's configuration. | Specified in bytes (for example, 200Ki, 50Mi, 5Gi). | 1Gi | Proof of concept = 1Gi, Minimum deployment = 16Gi*
limits.cpu | Limit on number of central processing units, based on your environment's configuration. | Specified in cores or millicores (for example, 200m, 0.5, 1). | | Proof of concept = 500m, Minimum deployment = 1
limits.memory | Available memory limit based on your environment's configuration. | Specified in bytes (for example, 200Ki, 50Mi, 5Gi). | | Proof of concept = 1Gi, Minimum deployment = 16Gi*

* Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod.
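For example, a proof-of-concept sizing based on the Examples column above might look like the following sketch. This is illustrative only and is not a supported production configuration:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    tracing:
      enabled: true
      jaeger:
        template: production-elasticsearch
        elasticsearch:
          nodeCount: 1
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "500m"
              memory: "1Gi"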
Procedure

1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
2. Navigate to Operators → Installed Operators.
3. Click the Red Hat OpenShift Service Mesh Operator.
4. Click the Istio Service Mesh Control Plane tab.
5. Click the name of your control plane file, for example, basic-install.
6. Click the YAML tab.
7. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct.
8. Click Save.
9. Click Reload.

OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters.

2.10.5.2. Connecting to an existing Jaeger instance

In order for the SMCP to connect to an existing Jaeger instance, the following must be true:

The Jaeger instance is deployed in the same namespace as the control plane, for example, into the istio-system namespace.
To enable secure communication between services, you should enable the oauth-proxy, which secures communication to your Jaeger instance, and make sure the secret is mounted into your Jaeger instance so Kiali can communicate with it.

To use a custom or already existing Jaeger instance, set spec.istio.tracing.enabled to "false" to disable the deployment of a Jaeger instance.

Supply the correct jaeger-collector endpoint to Mixer by setting spec.istio.global.tracer.zipkin.address to the hostname and port of your jaeger-collector service. The hostname of the service is usually <jaeger-instance-name>-collector.<namespace>.svc.cluster.local.

Supply the correct jaeger-query endpoint to Kiali for gathering traces by setting spec.istio.kiali.jaegerInClusterURL to the hostname of your jaeger-query service - the port is normally not required, as it uses 443 by default. The hostname of the service is usually <jaeger-instance-name>-query.<namespace>.svc.cluster.local.

Supply the dashboard URL of your Jaeger instance to Kiali to enable accessing Jaeger through the Kiali console. You can retrieve the URL from the OpenShift route that is created by the Jaeger Operator. If your Jaeger resource is called external-jaeger and resides in the istio-system project, you can retrieve the route using the following command:

$ oc get route -n istio-system external-jaeger

Example output

NAME              HOST/PORT                                      PATH   SERVICES                 [...]
external-jaeger   external-jaeger-istio-system.apps.test                external-jaeger-query    [...]

The value under HOST/PORT is the externally accessible URL of the Jaeger dashboard.

Example Jaeger resource

apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
  name: "external-jaeger"
  # Deploy to the Control Plane Namespace
  namespace: istio-system
spec:
  # Set Up Authentication
  ingress:
    enabled: true
    security: oauth-proxy
    openshift:
      # This limits user access to the Jaeger instance to users who have access
      # to the control plane namespace. Make sure to set the correct namespace here
      sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}'
      htpasswdFile: /etc/proxy/htpasswd/auth
  volumeMounts:
  - name: secret-htpasswd
    mountPath: /etc/proxy/htpasswd
  volumes:
  - name: secret-htpasswd
    secret:
      secretName: htpasswd
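If you saved the example Jaeger resource to a file, you can create it in the control plane namespace before referencing it from the ServiceMeshControlPlane. The file name here is hypothetical:

$ oc create -f external-jaeger.yaml -n istio-system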
The following ServiceMeshControlPlane example assumes that you have deployed Jaeger using the Jaeger Operator and the example Jaeger resource.

Example ServiceMeshControlPlane with external Jaeger

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: external-jaeger
  namespace: istio-system
spec:
  version: v1.1
  istio:
    tracing:
      # Disable Jaeger deployment by service mesh operator
      enabled: false
    global:
      tracer:
        zipkin:
          # Set Endpoint for Trace Collection
          address: external-jaeger-collector.istio-system.svc.cluster.local:9411
    kiali:
      # Set Jaeger dashboard URL
      dashboard:
        jaegerURL: https://external-jaeger-istio-system.apps.test
      # Set Endpoint for Trace Querying
      jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local
2.10.5.4. Configuring the Elasticsearch index cleaner job

When the Service Mesh Operator creates the ServiceMeshControlPlane it also creates the custom resource (CR) for Jaeger. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator then uses this CR when creating Jaeger instances.

When using Elasticsearch storage, by default a job is created to clean old traces from it. To configure the options for this job, you edit the Jaeger custom resource (CR) to customize it for your use case. The relevant options are listed below.

apiVersion: jaegertracing.io/v1
kind: Jaeger
spec:
  strategy: production
  storage:
    type: elasticsearch
    esIndexCleaner:
      enabled: false
      numberOfDays: 7
      schedule: "55 23 * * *"

Table 2.14. Elasticsearch index cleaner parameters
Parameter | Values | Description
enabled | true / false | Enable or disable the index cleaner job.
numberOfDays | integer value | Number of days to wait before deleting an index.
schedule | "55 23 * * *" | Cron expression for the job to run.

For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the Elasticsearch log store.
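For example, to turn the index cleaner on and delete indices older than seven days, you can set the following values in the Jaeger CR. This is a sketch based on the parameters above; the schedule shown runs the job daily at 23:55:

apiVersion: jaegertracing.io/v1
kind: Jaeger
spec:
  strategy: production
  storage:
    type: elasticsearch
    esIndexCleaner:
      enabled: true
      numberOfDays: 7
      schedule: "55 23 * * *"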
2.10.6. 3scale configuration

The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource.

Example 3scale parameters

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  addons:
    3Scale:
      enabled: false
      PARAM_THREESCALE_LISTEN_ADDR: 3333
      PARAM_THREESCALE_LOG_LEVEL: info
      PARAM_THREESCALE_LOG_JSON: true
      PARAM_THREESCALE_LOG_GRPC: false
      PARAM_THREESCALE_REPORT_METRICS: true
      PARAM_THREESCALE_METRICS_PORT: 8080
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000
      PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1
      PARAM_THREESCALE_ALLOW_INSECURE_CONN: false
      PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10
      PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60
      PARAM_USE_CACHED_BACKEND: false
      PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15
      PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true
# ...

Table 2.15. 3scale parameters
Parameter | Description | Values | Default value
enabled | Whether to use the 3scale adapter | true / false | false
PARAM_THREESCALE_LISTEN_ADDR | Sets the listen address for the gRPC server | Valid port number | 3333
PARAM_THREESCALE_LOG_LEVEL | Sets the minimum log output level. | debug, info, warn, error, or none | info
PARAM_THREESCALE_LOG_JSON | Controls whether the log is formatted as JSON | true / false | true
PARAM_THREESCALE_LOG_GRPC | Controls whether the log contains gRPC info | true / false | true
PARAM_THREESCALE_REPORT_METRICS | Controls whether 3scale system and backend metrics are collected and reported to Prometheus | true / false | true
PARAM_THREESCALE_METRICS_PORT | Sets the port that the 3scale /metrics endpoint can be scraped from | Valid port number | 8080
PARAM_THREESCALE_CACHE_TTL_SECONDS | Time period, in seconds, to wait before purging expired items from the cache | Time period in seconds | 300
PARAM_THREESCALE_CACHE_REFRESH_SECONDS | Time period, in seconds, before expiry when the cache attempts to refresh elements | Time period in seconds | 180
PARAM_THREESCALE_CACHE_ENTRIES_MAX | Max number of items that can be stored in the cache at any time. Set to 0 to disable caching | Valid number | 1000
PARAM_THREESCALE_CACHE_REFRESH_RETRIES | The number of times unreachable hosts are retried during a cache update loop | Valid number | 1
PARAM_THREESCALE_ALLOW_INSECURE_CONN | Allows skipping certificate verification when calling 3scale APIs. Enabling this is not recommended. | true / false | false
PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS | Sets the number of seconds to wait before terminating requests to 3scale System and Backend | Time period in seconds | 10
PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS | Sets the maximum amount of seconds (+/- 10% jitter) a connection may exist before it is closed | Time period in seconds | 60
PARAM_USE_CACHED_BACKEND | If true, attempt to create an in-memory apisonator cache for authorization requests | true / false | false
PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS | If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale | Time period in seconds | 15
PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED | Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests | true / false | true

2.11. Using the 3scale Istio adapter

Warning: You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh. For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh.

2.11.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh

You can use these examples to configure requests to your services using the 3scale Istio Adapter.

Prerequisites

Red Hat OpenShift Service Mesh version 1.x
A working 3scale account (SaaS or 3scale 2.5 On-Premises)
Enabling backend cache requires 3scale 2.9 or greater
Red Hat OpenShift Service Mesh prerequisites

Note: To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file.

Note: Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials.
You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler useful for only one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids.

Use a single handler per 3scale account by following the steps below:

Procedure

1. Create a handler for your 3scale account and specify your account credentials. Omit any service identifier.

apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: threescale
spec:
  adapter: threescale
  params:
    system_url: "https://<organization>-admin.3scale.net/"
    access_token: "<ACCESS_TOKEN>"
  connection:
    address: "threescale-istio-adapter:3333"

Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance and you wish to leverage the internal cluster DNS.

2. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows:
Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id.
Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1.

3. Repeat step 2 whenever you intend to add more services, to link each service to your 3scale account credentials and to its service identifier.

4. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler.

Rule configuration example

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: threescale
spec:
  match: destination.labels["service-mesh.3scale.net"] == "true"
  actions:
  - handler: threescale.handler
    instances:
    - threescale-authorization.instance

2.11.1.1. Generating 3scale custom resources

The adapter includes a tool that allows you to generate the handler, instance, and rule custom resources.

Table 2.16. Usage
Option | Description | Required | Default value
-h, --help | Produces help output for available options | No |
--name | Unique name for this URL, token pair | Yes |
-n, --namespace | Namespace to generate templates | No | istio-system
-t, --token | 3scale access token | Yes |
-u, --url | 3scale Admin Portal URL | Yes |
--backend-url | 3scale backend URL. If set, it overrides the value that is read from system configuration | No |
-s, --service | 3scale API/Service ID | No |
--auth | 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) | No | Hybrid
-o, --output | File to save produced manifests to | No | Standard output
--version | Outputs the CLI version and exits immediately | No |

2.11.1.1.1. Generate templates from URL examples

Note: Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter. Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec.

Procedure

Use the 3scale-config-gen command to autogenerate template files allowing the token, URL pair to be shared by multiple services as a single handler:

$ 3scale-config-gen --name=admin-credentials --url="https://<organization>-admin.3scale.net:443" --token="[redacted]"

The following example generates the templates with the service ID embedded in the handler:

$ 3scale-config-gen --url="https://<organization>-admin.3scale.net" --name="my-unique-id" --service="123456789" --token="[redacted]"

Additional resources

Tokens.
2.11.1.2. Generating manifests from a deployed adapter

Note: NAME is an identifier you use to identify the service you are managing with 3scale. The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information.

Run this command to generate manifests from a deployed adapter in the istio-system namespace:

$ export NS="istio-system" URL="https://replaceme-admin.3scale.net:443" NAME="name" TOKEN="token"
$ oc exec -n ${NS} $(oc get po -n ${NS} -o jsonpath='{.items[?(@.metadata.labels.app=="3scale-istio-adapter")].metadata.name}') -it -- ./3scale-config-gen --url ${URL} --name ${NAME} --token ${TOKEN} -n ${NS}

This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command.

When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways:

Label the workload (recommended)
Hard code the handler as service_id

Update the workload with the required annotations:

Note: You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence.

$ export CREDENTIALS_NAME="replace-me"
$ export SERVICE_ID="replace-me"
$ export DEPLOYMENT="replace-me"
$ patch="$(oc get deployment "${DEPLOYMENT}" --template='{"spec":{"template":{"metadata":{"labels":{ {{ range $k,$v := .spec.template.metadata.labels }}"{{ $k }}":"{{ $v }}",{{ end }}"service-mesh.3scale.net/service-id":"'"${SERVICE_ID}"'","service-mesh.3scale.net/credentials":"'"${CREDENTIALS_NAME}"'"}}}}}' )"
$ oc patch deployment "${DEPLOYMENT}" --patch ''"${patch}"''

2.11.1.3. Routing service traffic through the adapter

Follow these steps to drive traffic for your service through the 3scale adapter.

Prerequisites

Credentials and service ID from your 3scale administrator.

Procedure

1. Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource.
2. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. The value, threescale, refers to the name of the generated handler. This handler stores the access token required to call 3scale.
3. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time.

2.11.2. Configure the integration settings in 3scale

Follow this procedure to configure the 3scale integration settings.

Note: For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program.

Procedure

1. Navigate to [your_API_name] → Integration.
2. Click Settings.
3. Select the Istio option under Deployment. The API Key (user_key) option under Authentication is selected by default.
4. Click Update Product to save your selection.
5. Click Configuration.
6. Click Update Configuration.

2.11.3. Caching behavior

Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value.

Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value.

By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry.

2.11.4. Authenticating requests

This release supports the following authentication methods:

Standard API Keys: single randomized strings or hashes acting as an identifier and a secret token.
Application identifier and key pairs: immutable identifier and mutable secret key strings.
OpenID authentication method: client ID string parsed from the JSON Web Token.
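For example, a client can supply these credentials in either location. These are hypothetical requests that assume the gateway URL variable used elsewhere in this guide and the API key pattern described in the next section:

# API key sent as a query parameter
$ curl "http://$GATEWAY_URL/productpage?user_key=<API_KEY>"
# API key sent as a request header (header names must be lower case in the configuration)
$ curl -H "user-key: <API_KEY>" "http://$GATEWAY_URL/productpage"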
2.11.4.1. Applying authentication patterns

Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from:

Request headers
Request parameters
Both request headers and query parameters

Note: When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key, this must be referenced in the configuration as request.headers["user-key"].

2.11.4.1.1. API key authentication method

Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option.

In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header.

API key authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
  namespace: istio-system
spec:
  template: authorization
  params:
    subject:
      user: request.query_params["user_key"] | request.headers["user-key"] | ""
    action:
      path: request.url_path
      method: request.method | "get"

If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"].

2.11.4.1.2. Application ID and application key pair authentication method

Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option.

In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed.

Application ID and application key pair authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
  namespace: istio-system
spec:
  template: authorization
  params:
    subject:
      app_id: request.query_params["app_id"] | request.headers["app-id"] | ""
      app_key: request.query_params["app_key"] | request.headers["app-key"] | ""
    action:
      path: request.url_path
      method: request.method | "get"

If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification, change request.query_params["app_id"] to request.query_params["identification"].

2.11.4.1.3. OpenID authentication method

To use the OpenID Connect (OIDC) authentication method, use the properties value on the subject field to set client_id, and optionally app_key. You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp. You can modify this as needed.
OpenID authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
spec:
  template: threescale-authorization
  params:
    subject:
      properties:
        app_key: request.query_params["app_key"] | request.headers["app-key"] | ""
        client_id: request.auth.claims["azp"] | ""
    action:
      path: request.url_path
      method: request.method | "get"
      service: destination.labels["service-mesh.3scale.net/service-id"] | ""

For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request.

In the sample RequestAuthentication defined below, replace issuer, jwksUri, and selector as appropriate.

OpenID Policy example

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: productpage
  jwtRules:
  - issuer: >-
      http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak
    jwksUri: >-
      http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs

2.11.4.1.4. Hybrid authentication method

You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key.

In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers.

Hybrid authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
spec:
  template: authorization
  params:
    subject:
      user: request.query_params["user_key"] | request.headers["user-key"] |
      properties:
        app_id: request.query_params["app_id"] | request.headers["app-id"] | ""
        app_key: request.query_params["app_key"] | request.headers["app-key"] | ""
        client_id: request.auth.claims["azp"] | ""
    action:
      path: request.url_path
      method: request.method | "get"
      service: destination.labels["service-mesh.3scale.net/service-id"] | ""

2.11.5. 3scale Adapter metrics

The adapter, by default, reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus.

2.11.6. 3scale Istio adapter verification

You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem.

Procedure

1. Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace:

$ oc get pods -n istio-system

2. Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version:

$ oc logs -n istio-system <3scale_adapter_pod>

3. When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. See the example requests after this section.

Additional resources

Inspecting pod and container logs.
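As an illustration of the credential check described in step 3 above, a request without credentials to a protected service should be rejected, while one with a valid key should succeed. These are hypothetical requests assuming the API key pattern and the gateway URL variable used elsewhere in this guide:

# Should fail (no credentials supplied)
$ curl -i "http://$GATEWAY_URL/productpage"
# Should succeed (valid API key supplied)
$ curl -i "http://$GATEWAY_URL/productpage?user_key=<VALID_KEY>"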
2.11.7. 3scale Istio adapter troubleshooting checklist

As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation:

Incorrect YAML indentation.
Missing YAML sections.
Forgot to apply the changes in the YAML to the cluster.
Forgot to label the service workloads with the service-mesh.3scale.net/credentials key.
Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id, so they are reusable per account.
The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix.
The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist.
Wrong access token or URL for the 3scale Admin Portal in the handler.
The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id, app_key, or client_id, either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing.
Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it.

2.12. Removing Service Mesh

Warning: You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh. For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

To remove Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance, remove the control plane before removing the Operators.

2.12.1. Removing the Red Hat OpenShift Service Mesh control plane

To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources.

2.12.1.1. Removing the Service Mesh control plane using the web console

You can remove the Red Hat OpenShift Service Mesh control plane by using the web console.

Procedure

1. Log in to the OpenShift Container Platform web console.
2. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system.
3. Navigate to Operators → Installed Operators.
4. Click Service Mesh Control Plane under Provided APIs.
5. Click the ServiceMeshControlPlane menu.
6. Click Delete Service Mesh Control Plane.
7. Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane.

2.12.1.2. Removing the Service Mesh control plane using the CLI

You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project.

Procedure

1. Log in to the OpenShift Container Platform CLI.
2. Run the following command to delete the ServiceMeshMemberRoll resource.
$ oc delete smmr -n istio-system default

3. Run this command to retrieve the name of the installed ServiceMeshControlPlane:

$ oc get smcp -n istio-system

4. Replace <name_of_custom_resource> with the output from the previous command, and run this command to remove the custom resource:

$ oc delete smcp -n istio-system <name_of_custom_resource>

2.12.2. Removing the installed Operators

You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator, and the OpenShift Elasticsearch Operator.

2.12.2.1. Removing the Operators

Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators.

Red Hat OpenShift Service Mesh
Kiali
Red Hat OpenShift distributed tracing platform (Jaeger)
OpenShift Elasticsearch

Procedure

1. Log in to the OpenShift Container Platform web console.
2. From the Operators → Installed Operators page, scroll or type a keyword into the Filter by name field to find each Operator. Then, click the Operator name.
3. On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator.

2.12.2.2. Clean up Operator resources

Follow this procedure to manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console.

Prerequisites

An account with cluster administration access.
Access to the OpenShift CLI (oc).

Procedure

1. Log in to the OpenShift Container Platform CLI as a cluster administrator.
2. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using Jaeger as a standalone service without Service Mesh, do not delete the Jaeger resources.

Note: The Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed.

$ oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io
$ oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io
$ oc delete -n openshift-operators daemonset/istio-node
$ oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni
$ oc delete clusterrole istio-view istio-edit
$ oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view
$ oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete
$ oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete
$ oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete
$ oc delete crds jaegers.jaegertracing.io
$ oc delete svc admission-controller -n <operator-project>
$ oc delete project <istio-system-project>
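As a final, optional check that the cleanup succeeded, you can confirm that no Service Mesh related custom resource definitions remain. The command below mirrors the grep patterns used in the cleanup steps above:

$ oc get crds -o name | grep -E '\.istio\.io|\.maistra\.io|\.kiali\.io|jaegertracing\.io'

If the command returns no output, the custom resource definitions were removed.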
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5 gather <namespace>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: global: pathNormalization: <option>", "{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }", "oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap", "oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. 
version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings", "oc get jaeger -n istio-system", "NAME AGE jaeger 3d21h", "oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml", "oc delete jaeger jaeger -n istio-system", "oc create -f /tmp/jaeger-cr.yaml -n istio-system", "rm /tmp/jaeger-cr.yaml", "oc delete -f <jaeger-cr-file>", "oc delete -f jaeger-prod-elasticsearch.yaml", "oc create -f <jaeger-cr-file>", "oc get pods -n jaeger-system -w", "spec: version: v1.1", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "oc create -n istio-system -f istio-installation.yaml", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true", "apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}", "apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: 
host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false", "oc delete secret istio.default", "RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem", "/tmp/pod-cert-chain-workload.pem: OK", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o 
jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n <control_plane_namespace> get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators", "oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'", "maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded", "oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0", "deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: 
/home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks", "oc edit cm -n istio-system istio", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", 
\"value\":[\"'\"bookinfo\"'\"]}]'", "curl \"http://USDGATEWAY_URL/productpage\"", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "echo USDJAEGER_URL", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one", "istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret", "gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1", "mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:", "spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true", "enabled", "dashboard viewOnlyMode", "ingress enabled", "spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one", "tracing: enabled:", "jaeger: template:", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "oc get route -n istio-system external-jaeger", "NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. 
Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" oc exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\"
--template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n istio-system", "oc logs istio-system", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete -n openshift-operators daemonset/istio-node", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete svc admission-controller -n <operator-project>", "oc delete project 
<istio-system-project>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/service_mesh/service-mesh-1-x
Chapter 64. PasswordSource schema reference
Chapter 64. PasswordSource schema reference Used in: HashLoginServiceApiUsers , Password
Property | Property type | Description
secretKeyRef | SecretKeySelector | Selects a key of a Secret in the resource's namespace.
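For illustration, a minimal sketch of how a PasswordSource is typically used in a KafkaUser resource to set a predefined SCRAM-SHA-512 password; the Secret name ( my-user-secret ) and key ( my-password ) are assumptions for the example, and the referenced Secret must already exist in the same namespace:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
    password:
      valueFrom:
        secretKeyRef:
          name: my-user-secret  # assumed name of an existing Secret
          key: my-password      # assumed key within that Secret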
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-passwordsource-reference
3.6. Networking
3.6. Networking kernel component Some e1000e NICs may not get an IPv4 address assigned after the system is rebooted. To work around this issue, add the following line to the /etc/sysconfig/network-scripts/ifcfg-eth <X> file: LINKDELAY=10 A sample interface configuration file with this workaround applied is shown at the end of this section. NetworkManager component, BZ# 758076 If a Certificate Authority (CA) certificate is not selected when configuring an 802.1x or WPA-Enterprise connection, a dialog appears indicating that a missing CA certificate is a security risk. This dialog presents two options: ignore the missing CA certificate and proceed with the insecure connection, or choose a CA certificate. If the user elects to choose a CA certificate, this dialog disappears and the user may select the CA certificate in the original configuration dialog. samba component Current Samba versions shipped with Red Hat Enterprise Linux 6.3 are not able to fully control the user and group database when using the ldapsam_compat back end. This back end was never designed to run a production LDAP and Samba environment for a long period of time. The ldapsam_compat back end was created as a tool to ease migration from historical Samba releases (version 2.2.x) to Samba version 3 and greater using the new ldapsam back end and the new LDAP schema. The ldapsam_compat back end lacks various important LDAP attributes and object classes needed to provide full user and group management. In particular, it cannot allocate user and group IDs. In the Red Hat Enterprise Linux Reference Guide , it is pointed out that this back end is likely to be deprecated in future releases. Refer to Samba's documentation for instructions on how to migrate existing setups to the new LDAP schema. When you are not able to upgrade to the new LDAP schema (though upgrading is strongly recommended and is the preferred solution), you may work around this issue by keeping a dedicated machine running an older version of Samba (v2.2.x) for the purpose of user account management. Alternatively, you can create user accounts with standard LDIF files. The important part is the assignment of user and group IDs. In that case, the old Samba 2.2 algorithmic mapping from Windows RIDs to Unix IDs is the following: user RID = UID * 2 + 1000 , while for groups it is: group RID = GID * 2 + 1001 . For example, a user with UID 500 maps to RID 2000 (500 * 2 + 1000), and a group with GID 500 maps to RID 2001 (500 * 2 + 1001). With these workarounds, users can continue using the ldapsam_compat back end with their existing LDAP setup even when all the above restrictions apply. kernel component, BZ# 816888 Running the QFQ queuing discipline in a virtual guest eventually results in a kernel panic. kernel component Because Red Hat Enterprise Linux 6.3 defaults to using strict reverse path filtering, packets are dropped by default when the route for outbound traffic differs from the route of incoming traffic. This is in line with current recommended practice in RFC 3704. For more information about this issue, refer to /usr/share/doc/kernel-doc- <version> /Documentation/networking/ip-sysctl.txt and https://access.redhat.com/site/solutions/53031 . perftest component The rdma_bw and rdma_lat utilities (provided by the perftest package) are now deprecated and will be removed from the perftest package in a future update. Users should use the following utilities instead: ib_write_bw , ib_write_lat , ib_read_bw , and ib_read_lat .
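By way of illustration, a minimal sketch of an interface configuration file with the e1000e workaround applied; the device name and DHCP settings are assumptions for the example, and only the LINKDELAY line comes from the documented workaround:
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# Wait 10 seconds for link negotiation before requesting an address
LINKDELAY=10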
[ "LINKDELAY=10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/networking_issues
18.12.10.2. VLAN (802.1Q)
18.12.10.2. VLAN (802.1Q) Protocol ID: vlan Rules of this type should go either into the root or vlan chain.
Table 18.4. VLAN protocol types
Attribute Name | Datatype | Definition
srcmacaddr | MAC_ADDR | MAC address of sender
srcmacmask | MAC_MASK | Mask applied to MAC address of sender
dstmacaddr | MAC_ADDR | MAC address of destination
dstmacmask | MAC_MASK | Mask applied to MAC address of destination
vlan-id | UINT16 (0x0-0xfff, 0 - 4095) | VLAN ID
encap-protocol | UINT16 (0x03c-0xfff), String | Encapsulated layer 3 protocol ID; valid strings are arp, ipv4, ipv6
comment | STRING | Text string up to 256 characters
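As an illustrative sketch, a network filter that accepts only VLAN 42 frames carrying IPv4 and drops all other VLAN-tagged traffic might look like the following; the filter name and VLAN ID are assumptions, and the attribute names are taken from the table above:
<filter name='allow-vlan42-ipv4' chain='vlan'>
  <!-- Accept 802.1Q frames tagged with VLAN ID 42 that encapsulate IPv4 -->
  <rule action='accept' direction='inout' priority='500'>
    <vlan vlan-id='42' encap-protocol='ipv4'/>
  </rule>
  <!-- Drop any other VLAN-tagged frame -->
  <rule action='drop' direction='inout' priority='1000'>
    <vlan/>
  </rule>
</filter>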
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-vlan
Chapter 4. Installing a cluster on GCP with customizations
Chapter 4. Installing a cluster on GCP with customizations In OpenShift Container Platform version 4.17, you can install a customized cluster on infrastructure that the installation program provisions on Google Cloud Platform (GCP). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
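Before moving on, you can optionally confirm that the extracted installer runs. This is an illustrative sanity check; the version string and digest shown are placeholders, and the binary is assumed to be in the current directory:
USD ./openshift-install version
Example output
./openshift-install 4.17.0
built from commit <commit_id>
release image quay.io/openshift-release-dev/ocp-release@sha256:<digest>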
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on GCP". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 4.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.1. 
Minimum resource requirements
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, a two-socket machine with eight cores per socket and SMT enabled provides (2 x 8) x 2 = 32 vCPUs. [2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. [3] As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 4.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 4.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D N4 Tau T2D 4.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 4.2. Machine series for 64-bit ARM machines Tau T2A 4.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 4.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection.
For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 4.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 4.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 15 17 18 21 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. 
The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 4.5.8. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.6. Managing user-defined labels and tags for GCP Google Cloud Platform (GCP) provides labels and tags that help to identify and organize the resources created for a specific OpenShift Container Platform cluster, making them easier to manage. You can define labels and tags for each GCP resource only during OpenShift Container Platform cluster installation. Important User-defined labels and tags are not supported for OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.17. Note You cannot update the tags that are already added. Also, a new tag-supported resource creation fails if the configured tag keys or tag values are deleted. User-defined labels User-defined labels and OpenShift Container Platform specific labels are applied only to resources created by OpenShift Container Platform installation program and its core components such as: GCP filestore CSI Driver Operator GCP PD CSI Driver Operator Image Registry Operator Machine API provider for GCP User-defined labels are not attached to the resources created by any other Operators or the Kubernetes in-tree components. User-defined labels and OpenShift Container Platform labels are available on the following GCP resources: Compute disk Compute forwarding rule Compute image Compute instance DNS managed zone Filestore backup Filestore instance Storage bucket Limitations to user-defined labels Labels for ComputeAddress are supported in the GCP beta version. 
OpenShift Container Platform does not add labels to the resource. User-defined tags User-defined tags are applied only to resources created by OpenShift Container Platform installation program and its core components, such as the following resources: GCP FileStore CSI Driver Operator GCP PD CSI Driver Operator Image Registry Operator Machine API provider for GCP User-defined tags are not attached to the resources created by any other Operators or the Kubernetes in-tree components. User-defined tags are available on the following GCP resources: Compute disk Compute instance Filestore backup Filestore instance Storage bucket Limitations to the user-defined tags Tags must not be restricted to particular service accounts, because Operators create and use service accounts with minimal roles. OpenShift Container Platform does not create any key and value resources of the tag. OpenShift Container Platform specific tags are not added to any resource. Additional resources For more information about identifying the OrganizationID , see: OrganizationID For more information about identifying the ProjectID , see: ProjectID For more information about labels, see Labels Overview . For more information about tags, see Tags Overview . 4.6.1. Configuring user-defined labels and tags for GCP Prerequisites The installation program requires that a service account includes a TagUser role, so that the program can create the OpenShift Container Platform cluster with defined tags at both organization and project levels. Procedure Update the install-config.yaml file to define the list of desired labels and tags. Note Labels and tags are defined during the install-config.yaml creation phase, and cannot be modified or updated with new labels and tags after cluster creation. Sample install-config.yaml file apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name> 1 Adds keys and values as labels to the resources created on GCP. 2 Defines the label name. 3 Defines the label content. 4 Adds keys and values as tags to the resources created on GCP. 5 The ID of the hierarchical resource where the tags are defined, at the organization or the project level. The following are the requirements for user-defined labels: A label key and value must have a minimum of 1 character and can have a maximum of 63 characters. A label key and value must contain only lowercase letters, numeric characters, underscore ( _ ), and dash ( - ). A label key must start with a lowercase letter. You can configure a maximum of 32 labels per resource. Each resource can have a maximum of 64 labels, and 32 labels are reserved for internal use by OpenShift Container Platform. The following are the requirements for user-defined tags: Tag key and tag value must already exist. OpenShift Container Platform does not create the key and the value. A tag parentID can be either OrganizationID or ProjectID : OrganizationID must consist of decimal numbers without leading zeros. ProjectID must be 6 to 30 characters in length, that includes only lowercase letters, numbers, and hyphens. ProjectID must start with a letter, and cannot end with a hyphen. A tag key must contain only uppercase and lowercase alphanumeric characters, hyphen ( - ), underscore ( _ ), and period ( . ). A tag value must contain only uppercase and lowercase alphanumeric characters, hyphen ( - ), underscore ( _ ), period ( . 
), at sign ( @ ), percent sign ( % ), equals sign ( = ), plus ( + ), colon ( : ), comma ( , ), asterisk ( * ), pound sign ( USD ), ampersand ( & ), parentheses ( () ), square braces ( [] ), curly braces ( {} ), and space. A tag key and value must begin and end with an alphanumeric character. Tag value must be one of the pre-defined values for the key. You can configure a maximum of 50 tags. There should be no tag key defined with the same value as any of the existing tag keys that will be inherited from the parent resource. 4.6.2. Querying user-defined labels and tags for GCP After creating the OpenShift Container Platform cluster, you can access the list of the labels and tags defined for the GCP resources in the infrastructures.config.openshift.io/cluster object as shown in the following sample infrastructure.yaml file. Sample infrastructure.yaml file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP 1 The cluster ID that is generated during cluster installation. Along with the user-defined labels, resources have a label defined by the OpenShift Container Platform. The format of the OpenShift Container Platform labels is kubernetes-io-cluster-<cluster_id>:owned . 4.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 4.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 4.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 4.8.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 4.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 4.4. Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 4.8.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
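For example, an invocation with illustrative values might look like the following; the name, region, project ID, and directory are placeholders, not values from this document:
USD ccoctl gcp create-all \
  --name=mycluster \
  --region=us-central1 \
  --project=my-gcp-project \
  --credentials-requests-dir=./credrequests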
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 4.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 4.5. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 4.9. Using the GCP Marketplace offering Using the GCP Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on a pay-per-use basis (hourly, per core) through GCP, while still being supported directly by Red Hat. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to deploy compute machines. To deploy an OpenShift Container Platform cluster using an RHCOS image from the GCP Marketplace, override the default behavior by modifying the install-config.yaml file to reference the location of the GCP Marketplace offer.
Prerequisites You have an existing install-config.yaml file. Procedure Edit the compute.platform.gcp.osImage parameters to specify the location of the GCP Marketplace image: Set the project parameter to redhat-marketplace-public . Set the name parameter to one of the following offers: OpenShift Container Platform redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine redhat-coreos-oke-413-x86-64-202305021736 Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies a GCP Marketplace image for compute machines apiVersion: v1 baseDomain: example.com controlPlane: # ... compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736 # ... 4.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud CLI default credentials Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time.
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: $ oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
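Before you move on to customizing the cluster, you can run two further standard oc checks to confirm that the cluster is healthy:

$ oc get nodes
$ oc get clusteroperators

A healthy cluster lists all nodes with a Ready status and, once the installation settles, reports all cluster Operators as Available.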
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name>", "apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp 
create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "apiVersion: v1 baseDomain: example.com controlPlane: compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-gcp-customizations
Chapter 6. PodSecurityPolicySubjectReview [security.openshift.io/v1]
Chapter 6. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicySubjectReviewSpec defines the specification for PodSecurityPolicySubjectReview status object PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. 6.1.1. .spec Description PodSecurityPolicySubjectReviewSpec defines the specification for PodSecurityPolicySubjectReview Type object Required template Property Type Description groups array (string) groups is the groups you're testing for. template PodTemplateSpec template is the PodTemplateSpec to check. If template.spec.serviceAccountName is empty, it will not be defaulted. If it is non-empty, it will be checked. user string user is the user you're testing for. If you specify "user" but not "groups", then it is interpreted as "What if user were not a member of any groups?". If user and groups are empty, then the check is performed using only the serviceAccountName in the template. 6.1.2. .status Description PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. Type object Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy. A nil value indicates that the request was denied. reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 6.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicysubjectreviews POST : create a PodSecurityPolicySubjectReview 6.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicysubjectreviews Table 6.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a PodSecurityPolicySubjectReview Table 6.2. Body parameters Parameter Type Description body PodSecurityPolicySubjectReview schema Table 6.3. HTTP responses HTTP code Response body 200 - OK PodSecurityPolicySubjectReview schema 201 - Created PodSecurityPolicySubjectReview schema 202 - Accepted PodSecurityPolicySubjectReview schema 401 - Unauthorized Empty
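To illustrate the spec and status fields described above, the following is a minimal sketch of a review request; the namespace, user name, group, and container image are illustrative placeholders:

apiVersion: security.openshift.io/v1
kind: PodSecurityPolicySubjectReview
metadata:
  namespace: my-namespace
spec:
  user: alice
  groups:
  - system:authenticated
  template:
    spec:
      containers:
      - name: test
        image: registry.example.com/test:latest

Submitting this object with a POST to the endpoint above, for example by running oc create -f review.yaml -o yaml , returns the object with status populated: a non-nil status.allowedBy references the rule that admits the pod template, and a nil allowedBy indicates that the request was denied.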
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_apis/podsecuritypolicysubjectreview-security-openshift-io-v1
Installing and using Red Hat build of OpenJDK 21 for Windows
Installing and using Red Hat build of OpenJDK 21 for Windows Red Hat build of OpenJDK 21 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/installing_and_using_red_hat_build_of_openjdk_21_for_windows/index
Chapter 1. The OpenStack Client
Chapter 1. The OpenStack Client The openstack client is a common OpenStack command-line interface (CLI). This chapter documents the main options for openstack version 5.5.2 . Command-line interface to the OpenStack APIs Usage: Table 1.1. Command arguments Value Summary --version Show program's version number and exit -v, --verbose Increase verbosity of output. Can be repeated. -q, --quiet Suppress output except warnings and errors. --log-file LOG_FILE Specify a file to log output. Disabled by default. -h, --help Show help message and exit. --debug Show tracebacks on errors. --os-cloud <cloud-config-name> Cloud name in clouds.yaml (Env: OS_CLOUD) --os-region-name <auth-region-name> Authentication region name (Env: OS_REGION_NAME) --os-cacert <ca-bundle-file> CA certificate bundle file (Env: OS_CACERT) --os-cert <certificate-file> Client certificate bundle file (Env: OS_CERT) --os-key <key-file> Client certificate key file (Env: OS_KEY) --verify Verify server certificate (default) --insecure Disable server certificate verification --os-default-domain <auth-domain> Default domain ID, default=default. (Env: OS_DEFAULT_DOMAIN) --os-interface <interface> Select an interface type. Valid interface types: [admin, public, internal]. Default=public. (Env: OS_INTERFACE) --os-service-provider <service_provider> Authenticate with and perform the command on a service provider using Keystone-to-Keystone federation. Must also specify the remote project option. --os-remote-project-name <remote_project_name> Project name when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-id <remote_project_id> Project ID when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-name <remote_project_domain_name> Domain name of the project when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-id <remote_project_domain_id> Domain ID of the project when authenticating to a service provider if using Keystone-to-Keystone federation.
--timing Print API call timing info --os-beta-command Enable beta commands which are subject to change --os-profile hmac-key HMAC key for encrypting profiling context data --os-compute-api-version <compute-api-version> Compute API version, default=2.1 (Env: OS_COMPUTE_API_VERSION) --os-identity-api-version <identity-api-version> Identity API version, default=3 (Env: OS_IDENTITY_API_VERSION) --os-image-api-version <image-api-version> Image API version, default=2 (Env: OS_IMAGE_API_VERSION) --os-network-api-version <network-api-version> Network API version, default=2.0 (Env: OS_NETWORK_API_VERSION) --os-object-api-version <object-api-version> Object API version, default=1 (Env: OS_OBJECT_API_VERSION) --os-volume-api-version <volume-api-version> Volume API version, default=3 (Env: OS_VOLUME_API_VERSION) --os-alarming-api-version <alarming-api-version> Queues API version, default=2 (Env: OS_ALARMING_API_VERSION) --os-metrics-api-version <metrics-api-version> Metrics API version, default=1 (Env: OS_METRICS_API_VERSION) --os-key-manager-api-version <key-manager-api-version> Barbican API version, default=1 (Env: OS_KEY_MANAGER_API_VERSION) --os-dns-api-version <dns-api-version> DNS API version, default=2 (Env: OS_DNS_API_VERSION) --os-orchestration-api-version <orchestration-api-version> Orchestration API version, default=1 (Env: OS_ORCHESTRATION_API_VERSION) --inspector-api-version INSPECTOR_API_VERSION Inspector API version, only 1 is supported now (Env: INSPECTOR_VERSION). --inspector-url INSPECTOR_URL Inspector URL, defaults to localhost (Env: INSPECTOR_URL). --os-baremetal-api-version <baremetal-api-version> Bare metal API version, default="latest" (the maximum version supported by both the client and the server). (Env: OS_BAREMETAL_API_VERSION) --os-container-infra-api-version <container-infra-api-version> Container-infra API version, default=1 (Env: OS_CONTAINER_INFRA_API_VERSION) --os-share-api-version <shared-file-system-api-version> Shared file system API version, default=2.63 (the maximum version supported by both the client and the server). (Env: OS_SHARE_API_VERSION) --os-workflow-api-version <workflow-api-version> Workflow API version, default=2 (Env: OS_WORKFLOW_API_VERSION) --os-loadbalancer-api-version <loadbalancer-api-version> OSC plugin API version, default=2.0 (Env: OS_LOADBALANCER_API_VERSION) --os-data-processing-api-version <data-processing-api-version> Data processing API version, default=1.1 (Env: OS_DATA_PROCESSING_API_VERSION) --os-data-processing-url OS_DATA_PROCESSING_URL Data processing API URL (Env: OS_DATA_PROCESSING_API_URL) --os-tripleoclient-api-version <tripleoclient-api-version> TripleO client API version, default=2 (Env: OS_TRIPLEOCLIENT_API_VERSION) --os-database-api-version <database-api-version> Database API version, default=1 (Env: OS_DATABASE_API_VERSION) --os-queues-api-version <queues-api-version> Queues API version, default=2 (Env: OS_QUEUES_API_VERSION) --os-auth-type <auth-type> Select an authentication type. Available types: v3oidcaccesstoken, admin_token, gnocchi-noauth, noauth, v3oidcpassword, gnocchi-basic, v1password, v3samlpassword, v3oauth1, v3totp, http_basic, v2token, v2password, v3adfspassword, v3token, v3applicationcredential, v3oidcauthcode, aodh-noauth, v3multifactor, v3tokenlessauth, token, v3oidcclientcredentials, none, password, v3password.
Default: selected based on --os-username/--os-token (Env: OS_AUTH_TYPE) --os-auth-url <auth-auth-url> With v3oidcaccesstoken: authentication url with v3oidcpassword: Authentication URL With v1password: Authentication URL With v3samlpassword: Authentication URL With v3oauth1: Authentication URL With v3totp: Authentication URL With v2token: Authentication URL With v2password: Authentication URL With v3adfspassword: Authentication URL With v3token: Authentication URL With v3applicationcredential: Authentication URL With v3oidcauthcode: Authentication URL With v3multifactor: Authentication URL With v3tokenlessauth: Authentication URL With token: Authentication URL With v3oidcclientcredentials: Authentication URL With password: Authentication URL With v3password: Authentication URL (Env: OS_AUTH_URL) --os-system-scope <auth-system-scope> With v3oidcaccesstoken: scope for system operations With v3oidcpassword: Scope for system operations With v3samlpassword: Scope for system operations With v3totp: Scope for system operations With v3adfspassword: Scope for system operations With v3token: Scope for system operations With v3applicationcredential: Scope for system operations With v3oidcauthcode: Scope for system operations With v3multifactor: Scope for system operations With token: Scope for system operations With v3oidcclientcredentials: Scope for system operations With password: Scope for system operations With v3password: Scope for system operations (Env: OS_SYSTEM_SCOPE) --os-domain-id <auth-domain-id> With v3oidcaccesstoken: domain id to scope to with v3oidcpassword: Domain ID to scope to With v3samlpassword: Domain ID to scope to With v3totp: Domain ID to scope to With v3adfspassword: Domain ID to scope to With v3token: Domain ID to scope to With v3applicationcredential: Domain ID to scope to With v3oidcauthcode: Domain ID to scope to With v3multifactor: Domain ID to scope to With v3tokenlessauth: Domain ID to scope to With token: Domain ID to scope to With v3oidcclientcredentials: Domain ID to scope to With password: Domain ID to scope to With v3password: Domain ID to scope to (Env: OS_DOMAIN_ID) --os-domain-name <auth-domain-name> With v3oidcaccesstoken: domain name to scope to with v3oidcpassword: Domain name to scope to With v3samlpassword: Domain name to scope to With v3totp: Domain name to scope to With v3adfspassword: Domain name to scope to With v3token: Domain name to scope to With v3applicationcredential: Domain name to scope to With v3oidcauthcode: Domain name to scope to With v3multifactor: Domain name to scope to With v3tokenlessauth: Domain name to scope to With token: Domain name to scope to With v3oidcclientcredentials: Domain name to scope to With password: Domain name to scope to With v3password: Domain name to scope to (Env: OS_DOMAIN_NAME) --os-project-id <auth-project-id> With v3oidcaccesstoken: project id to scope to with gnocchi-noauth: Project ID With noauth: Project ID With v3oidcpassword: Project ID to scope to With v3samlpassword: Project ID to scope to With v3totp: Project ID to scope to With v3adfspassword: Project ID to scope to With v3token: Project ID to scope to With v3applicationcredential: Project ID to scope to With v3oidcauthcode: Project ID to scope to With aodh- noauth: Project ID With v3multifactor: Project ID to scope to With v3tokenlessauth: Project ID to scope to With token: Project ID to scope to With v3oidcclientcredentials: Project ID to scope to With password: Project ID to scope to With v3password: Project ID to scope to (Env: OS_PROJECT_ID) 
--os-project-name <auth-project-name> With v3oidcaccesstoken: project name to scope to with v3oidcpassword: Project name to scope to With v1password: Swift account to use With v3samlpassword: Project name to scope to With v3totp: Project name to scope to With v3adfspassword: Project name to scope to With v3token: Project name to scope to With v3applicationcredential: Project name to scope to With v3oidcauthcode: Project name to scope to With v3multifactor: Project name to scope to With v3tokenlessauth: Project name to scope to With token: Project name to scope to With v3oidcclientcredentials: Project name to scope to With password: Project name to scope to With v3password: Project name to scope to (Env: OS_PROJECT_NAME) --os-project-domain-id <auth-project-domain-id> With v3oidcaccesstoken: domain id containing project With v3oidcpassword: Domain ID containing project With v3samlpassword: Domain ID containing project With v3totp: Domain ID containing project With v3adfspassword: Domain ID containing project With v3token: Domain ID containing project With v3applicationcredential: Domain ID containing project With v3oidcauthcode: Domain ID containing project With v3multifactor: Domain ID containing project With v3tokenlessauth: Domain ID containing project With token: Domain ID containing project With v3oidcclientcredentials: Domain ID containing project With password: Domain ID containing project With v3password: Domain ID containing project (Env: OS_PROJECT_DOMAIN_ID) --os-project-domain-name <auth-project-domain-name> With v3oidcaccesstoken: domain name containing project With v3oidcpassword: Domain name containing project With v3samlpassword: Domain name containing project With v3totp: Domain name containing project With v3adfspassword: Domain name containing project With v3token: Domain name containing project With v3applicationcredential: Domain name containing project With v3oidcauthcode: Domain name containing project With v3multifactor: Domain name containing project With v3tokenlessauth: Domain name containing project With token: Domain name containing project With v3oidcclientcredentials: Domain name containing project With password: Domain name containing project With v3password: Domain name containing project (Env: OS_PROJECT_DOMAIN_NAME) --os-trust-id <auth-trust-id> With v3oidcaccesstoken: trust id with v3oidcpassword: Trust ID With v3samlpassword: Trust ID With v3totp: Trust ID With v2token: Trust ID With v2password: Trust ID With v3adfspassword: Trust ID With v3token: Trust ID With v3applicationcredential: Trust ID With v3oidcauthcode: Trust ID With v3multifactor: Trust ID With token: Trust ID With v3oidcclientcredentials: Trust ID With password: Trust ID With v3password: Trust ID (Env: OS_TRUST_ID) --os-identity-provider <auth-identity-provider> With v3oidcaccesstoken: identity provider's name with v3oidcpassword: Identity Provider's name With v3samlpassword: Identity Provider's name With v3adfspassword: Identity Provider's name With v3oidcauthcode: Identity Provider's name With v3oidcclientcredentials: Identity Provider's name (Env: OS_IDENTITY_PROVIDER) --os-protocol <auth-protocol> With v3oidcaccesstoken: protocol for federated plugin With v3oidcpassword: Protocol for federated plugin With v3samlpassword: Protocol for federated plugin With v3adfspassword: Protocol for federated plugin With v3oidcauthcode: Protocol for federated plugin With v3oidcclientcredentials: Protocol for federated plugin (Env: OS_PROTOCOL) --os-access-token <auth-access-token> With 
v3oidcaccesstoken: oauth 2.0 access token (env: OS_ACCESS_TOKEN) --os-endpoint <auth-endpoint> With admin_token: the endpoint that will always be used With gnocchi-noauth: Gnocchi endpoint With noauth: Cinder endpoint With gnocchi-basic: Gnocchi endpoint With http_basic: The endpoint that will always be used With none: The endpoint that will always be used (Env: OS_ENDPOINT) --os-token <auth-token> With admin_token: the token that will always be used With v2token: Token With v3token: Token to authenticate with With token: Token to authenticate with (Env: OS_TOKEN) --os-user-id <auth-user-id> With gnocchi-noauth: user id with noauth: user id with v3totp: User ID With v2password: User ID to login with With v3applicationcredential: User ID With aodh- noauth: User ID With password: User id With v3password: User ID (Env: OS_USER_ID) --os-roles <auth-roles> With gnocchi-noauth: roles with aodh-noauth: roles (Env: OS_ROLES) --os-client-id <auth-client-id> With v3oidcpassword: oauth 2.0 client id with v3oidcauthcode: OAuth 2.0 Client ID With v3oidcclientcredentials: OAuth 2.0 Client ID (Env: OS_CLIENT_ID) --os-client-secret <auth-client-secret> With v3oidcpassword: oauth 2.0 client secret with v3oidcauthcode: OAuth 2.0 Client Secret With v3oidcclientcredentials: OAuth 2.0 Client Secret (Env: OS_CLIENT_SECRET) --os-openid-scope <auth-openid-scope> With v3oidcpassword: openid connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcauthcode: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcclientcredentials: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. (Env: OS_OPENID_SCOPE) --os-access-token-endpoint <auth-access-token-endpoint> With v3oidcpassword: openid connect provider token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcauthcode: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcclientcredentials: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. (Env: OS_ACCESS_TOKEN_ENDPOINT) --os-discovery-endpoint <auth-discovery-endpoint> With v3oidcpassword: openid connect discovery document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcauthcode: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcclientcredentials: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. 
This URL should look like https://idp.example.org/.well- known/openid-configuration (Env: OS_DISCOVERY_ENDPOINT) --os-access-token-type <auth-access-token-type> With v3oidcpassword: oauth 2.0 authorization server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcauthcode: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcclientcredentials: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" (Env: OS_ACCESS_TOKEN_TYPE) --os-username <auth-username> With v3oidcpassword: username with v1password: Username to login with With v3samlpassword: Username With v3totp: Username With http_basic: Username With v2password: Username to login with With v3adfspassword: Username With v3applicationcredential: Username With password: Username With v3password: Username (Env: OS_USERNAME) --os-password <auth-password> With v3oidcpassword: password with v1password: Password to use With v3samlpassword: Password With http_basic: User's password With v2password: Password to use With v3adfspassword: Password With password: User's password With v3password: User's password (Env: OS_PASSWORD) --os-user <auth-user> With gnocchi-basic: user (env: os_user) --os-identity-provider-url <auth-identity-provider-url> With v3samlpassword: an identity provider url, where the SAML2 authentication request will be sent. With v3adfspassword: An Identity Provider URL, where the SAML authentication request will be sent. 
(Env: OS_IDENTITY_PROVIDER_URL) --os-consumer-key <auth-consumer-key> With v3oauth1: oauth consumer id/key (env: OS_CONSUMER_KEY) --os-consumer-secret <auth-consumer-secret> With v3oauth1: oauth consumer secret (env: OS_CONSUMER_SECRET) --os-access-key <auth-access-key> With v3oauth1: oauth access key (env: os_access_key) --os-access-secret <auth-access-secret> With v3oauth1: oauth access secret (env: OS_ACCESS_SECRET) --os-user-domain-id <auth-user-domain-id> With v3totp: user's domain id with v3applicationcredential: User's domain id With password: User's domain id With v3password: User's domain id (Env: OS_USER_DOMAIN_ID) --os-user-domain-name <auth-user-domain-name> With v3totp: user's domain name with v3applicationcredential: User's domain name With password: User's domain name With v3password: User's domain name (Env: OS_USER_DOMAIN_NAME) --os-passcode <auth-passcode> With v3totp: user's totp passcode (env: os_passcode) --os-service-provider-endpoint <auth-service-provider-endpoint> With v3adfspassword: service provider's endpoint (env: OS_SERVICE_PROVIDER_ENDPOINT) --os-service-provider-entity-id <auth-service-provider-entity-id> With v3adfspassword: service provider's saml entity id (Env: OS_SERVICE_PROVIDER_ENTITY_ID) --os-application-credential-secret <auth-application-credential-secret> With v3applicationcredential: application credential auth secret (Env: OS_APPLICATION_CREDENTIAL_SECRET) --os-application-credential-id <auth-application-credential-id> With v3applicationcredential: application credential ID (Env: OS_APPLICATION_CREDENTIAL_ID) --os-application-credential-name <auth-application-credential-name> With v3applicationcredential: application credential name (Env: OS_APPLICATION_CREDENTIAL_NAME) --os-redirect-uri <auth-redirect-uri> With v3oidcauthcode: openid connect redirect url (env: OS_REDIRECT_URI) --os-code <auth-code> With v3oidcauthcode: oauth 2.0 authorization code (Env: OS_CODE) --os-aodh-endpoint <auth-aodh-endpoint> With aodh-noauth: aodh endpoint (env: OS_AODH_ENDPOINT) --os-auth-methods <auth-auth-methods> With v3multifactor: methods to authenticate with. (Env: OS_AUTH_METHODS) --os-default-domain-id <auth-default-domain-id> With token: optional domain id to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With password: Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_ID) --os-default-domain-name <auth-default-domain-name> With token: optional domain name to use with v3 api and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With password: Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_NAME)
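In practice, only a few of these options are passed on the command line; most sessions select a named cloud from clouds.yaml or export a handful of OS_* environment variables and let the client resolve the rest. A typical session might look like the following sketch; the cloud name, Keystone URL, and project values are illustrative placeholders:

$ export OS_CLOUD=mycloud
$ openstack server list

or, with explicit password authentication (the client prompts for the password if --os-password or OS_PASSWORD is not set):

$ openstack --os-auth-url https://keystone.example.com:5000/v3 \
  --os-project-name demo \
  --os-username demo \
  --os-user-domain-name Default \
  --os-project-domain-name Default \
  server list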
[ "openstack [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug] [--os-cloud <cloud-config-name>] [--os-region-name <auth-region-name>] [--os-cacert <ca-bundle-file>] [--os-cert <certificate-file>] [--os-key <key-file>] [--verify | --insecure] [--os-default-domain <auth-domain>] [--os-interface <interface>] [--os-service-provider <service_provider>] [--os-remote-project-name <remote_project_name> | --os-remote-project-id <remote_project_id>] [--os-remote-project-domain-name <remote_project_domain_name> | --os-remote-project-domain-id <remote_project_domain_id>] [--timing] [--os-beta-command] [--os-profile hmac-key] [--os-compute-api-version <compute-api-version>] [--os-identity-api-version <identity-api-version>] [--os-image-api-version <image-api-version>] [--os-network-api-version <network-api-version>] [--os-object-api-version <object-api-version>] [--os-volume-api-version <volume-api-version>] [--os-alarming-api-version <alarming-api-version>] [--os-metrics-api-version <metrics-api-version>] [--os-key-manager-api-version <key-manager-api-version>] [--os-dns-api-version <dns-api-version>] [--os-orchestration-api-version <orchestration-api-version>] [--inspector-api-version INSPECTOR_API_VERSION] [--inspector-url INSPECTOR_URL] [--os-baremetal-api-version <baremetal-api-version>] [--os-container-infra-api-version <container-infra-api-version>] [--os-share-api-version <shared-file-system-api-version>] [--os-workflow-api-version <workflow-api-version>] [--os-loadbalancer-api-version <loadbalancer-api-version>] [--os-data-processing-api-version <data-processing-api-version>] [--os-data-processing-url OS_DATA_PROCESSING_URL] [--os-tripleoclient-api-version <tripleoclient-api-version>] [--os-database-api-version <database-api-version>] [--os-queues-api-version <queues-api-version>] [--os-auth-type <auth-type>] [--os-auth-url <auth-auth-url>] [--os-system-scope <auth-system-scope>] [--os-domain-id <auth-domain-id>] [--os-domain-name <auth-domain-name>] [--os-project-id <auth-project-id>] [--os-project-name <auth-project-name>] [--os-project-domain-id <auth-project-domain-id>] [--os-project-domain-name <auth-project-domain-name>] [--os-trust-id <auth-trust-id>] [--os-identity-provider <auth-identity-provider>] [--os-protocol <auth-protocol>] [--os-access-token <auth-access-token>] [--os-endpoint <auth-endpoint>] [--os-token <auth-token>] [--os-user-id <auth-user-id>] [--os-roles <auth-roles>] [--os-client-id <auth-client-id>] [--os-client-secret <auth-client-secret>] [--os-openid-scope <auth-openid-scope>] [--os-access-token-endpoint <auth-access-token-endpoint>] [--os-discovery-endpoint <auth-discovery-endpoint>] [--os-access-token-type <auth-access-token-type>] [--os-username <auth-username>] [--os-password <auth-password>] [--os-user <auth-user>] [--os-identity-provider-url <auth-identity-provider-url>] [--os-consumer-key <auth-consumer-key>] [--os-consumer-secret <auth-consumer-secret>] [--os-access-key <auth-access-key>] [--os-access-secret <auth-access-secret>] [--os-user-domain-id <auth-user-domain-id>] [--os-user-domain-name <auth-user-domain-name>] [--os-passcode <auth-passcode>] [--os-service-provider-endpoint <auth-service-provider-endpoint>] [--os-service-provider-entity-id <auth-service-provider-entity-id>] [--os-application-credential-secret <auth-application-credential-secret>] [--os-application-credential-id <auth-application-credential-id>] [--os-application-credential-name <auth-application-credential-name>] [--os-redirect-uri <auth-redirect-uri>] [--os-code 
<auth-code>] [--os-aodh-endpoint <auth-aodh-endpoint>] [--os-auth-methods <auth-auth-methods>] [--os-default-domain-id <auth-default-domain-id>] [--os-default-domain-name <auth-default-domain-name>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/the_openstack_client
4.100. irqbalance
4.100. irqbalance 4.100.1. RHBA-2012:0552 - irqbalance bug fix update Updated irqbalance packages that fix one bug are now available for Red Hat Enterprise Linux 6. The irqbalance package provides a daemon that evenly distributes interrupt request (IRQ) load across multiple CPUs for enhanced performance. Bug Fix BZ# 817873 The irqbalance daemon assigns each interrupt source in the system to a "class", which represents the type of the device (for example, Networking, Storage, or Media). Previously, irqbalance used the IRQ handler names from the /proc/interrupts file to decide the source class, which caused irqbalance to misclassify network interrupts. As a consequence, systems using biosdevname NIC naming did not have their hardware interrupts distributed and pinned as expected. With this update, the device classification mechanism has been improved and now ensures better interrupt distribution. All users of irqbalance are advised to upgrade to these updated packages, which fix this bug.
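You can inspect the interrupt activity that irqbalance manages by reading /proc/interrupts; each row shows an IRQ number, per-CPU interrupt counts, and the handler name that irqbalance formerly relied on for classification. For example, to check which CPUs service the interrupts of a biosdevname-named NIC such as em1 (the device name is illustrative):

$ grep em1 /proc/interrupts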
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/irqbalance
Chapter 6. ClusterVersion [config.openshift.io/v1]
Chapter 6. ClusterVersion [config.openshift.io/v1] Description ClusterVersion is the configuration for the ClusterVersionOperator. This is where parameters related to automatic updates can be set. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the desired state of the cluster version - the operator will work to ensure that the desired version is applied to the cluster. status object status contains information about the available updates and any in-progress updates. 6.1.1. .spec Description spec is the desired state of the cluster version - the operator will work to ensure that the desired version is applied to the cluster. Type object Required clusterID Property Type Description capabilities object capabilities configures the installation of optional, core cluster components. A null value here is identical to an empty object; see the child properties for default semantics. channel string channel is an identifier for explicitly requesting that a non-default set of updates be applied to this cluster. The default channel will contain stable updates that are appropriate for production clusters. clusterID string clusterID uniquely identifies this cluster. This is expected to be an RFC4122 UUID value (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx in hexadecimal values). This is a required field. desiredUpdate object desiredUpdate is an optional field that indicates the desired value of the cluster version. Setting this value will trigger an upgrade (if the current version does not match the desired version). The set of recommended update values is listed as part of available updates in status, and setting values outside that range may cause the upgrade to fail. You may specify the version field without setting image if an update exists with that version in the availableUpdates or history. If an upgrade fails the operator will halt and report status about the failing component. Setting the desired update value back to the previous version will cause a rollback to be attempted. Not all rollbacks will succeed. overrides array overrides is a list of overrides for components that are managed by the cluster version operator. Marking a component unmanaged will prevent the operator from creating or updating the object. overrides[] object ComponentOverride allows overriding cluster version operator's behavior for a component. upstream string upstream may be used to specify the preferred update server. By default it will use the appropriate update server for the cluster and region. 6.1.2. .spec.capabilities Description capabilities configures the installation of optional, core cluster components.
A null value here is identical to an empty object; see the child properties for default semantics. Type object Property Type Description additionalEnabledCapabilities array (string) additionalEnabledCapabilities extends the set of managed capabilities beyond the baseline defined in baselineCapabilitySet. The default is an empty set. baselineCapabilitySet string baselineCapabilitySet selects an initial set of optional capabilities to enable, which can be extended via additionalEnabledCapabilities. If unset, the cluster will choose a default, and the default may change over time. The current default is vCurrent. 6.1.3. .spec.desiredUpdate Description desiredUpdate is an optional field that indicates the desired value of the cluster version. Setting this value will trigger an upgrade (if the current version does not match the desired version). The set of recommended update values is listed as part of available updates in status, and setting values outside that range may cause the upgrade to fail. You may specify the version field without setting image if an update exists with that version in the availableUpdates or history. If an upgrade fails the operator will halt and report status about the failing component. Setting the desired update value back to the previous version will cause a rollback to be attempted. Not all rollbacks will succeed. Type object Property Type Description force boolean force allows an administrator to update to an image that has failed verification or upgradeable checks. This option should only be used when the authenticity of the provided image has been verified out of band because the provided image will run with full administrative access to the cluster. Do not use this flag with images that come from unknown or potentially malicious sources. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. version string version is a semantic versioning identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.4. .spec.overrides Description overrides is a list of overrides for components that are managed by the cluster version operator. Marking a component unmanaged will prevent the operator from creating or updating the object. Type array 6.1.5. .spec.overrides[] Description ComponentOverride allows overriding cluster version operator's behavior for a component. Type object Required group kind name namespace unmanaged Property Type Description group string group identifies the API group that the kind is in. kind string kind identifies which object to override. name string name is the component's name. namespace string namespace is the component's namespace. If the resource is cluster scoped, the namespace should be empty. unmanaged boolean unmanaged controls if the cluster version operator should stop managing the resources in this cluster. Default: false 6.1.6. .status Description status contains information about the available updates and any in-progress updates. Type object Required desired observedGeneration versionHash Property Type Description availableUpdates `` availableUpdates contains updates recommended for this cluster. Updates which appear in conditionalUpdates but not in availableUpdates may expose this cluster to known issues. This list may be empty if no updates are recommended, if the update service is unavailable, or if an invalid channel has been specified.
capabilities object capabilities describes the state of optional, core cluster components. conditionalUpdates array conditionalUpdates contains the list of updates that may be recommended for this cluster if it meets specific required conditions. Consumers interested in the set of updates that are actually recommended for this cluster should use availableUpdates. This list may be empty if no updates are recommended, if the update service is unavailable, or if an empty or invalid channel has been specified. conditionalUpdates[] object ConditionalUpdate represents an update which is recommended to some clusters on the version the current cluster is reconciling, but which may not be recommended for the current cluster. conditions array conditions provides information about the cluster version. The condition "Available" is set to true if the desiredUpdate has been reached. The condition "Progressing" is set to true if an update is being applied. The condition "Degraded" is set to true if an update is currently blocked by a temporary or permanent error. Conditions are only valid for the current desiredUpdate when metadata.generation is equal to status.generation. conditions[] object ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. desired object desired is the version that the cluster is reconciling towards. If the cluster is not yet fully initialized desired will be set with the information available, which may be an image or a tag. history array history contains a list of the most recent versions applied to the cluster. This value may be empty during cluster startup, and then will be updated when a new update is being applied. The newest update is first in the list and it is ordered by recency. Updates in the history have state Completed if the rollout completed - if an update was failing or halfway applied the state will be Partial. Only a limited amount of update history is preserved. history[] object UpdateHistory is a single attempted update to the cluster. observedGeneration integer observedGeneration reports which version of the spec is being synced. If this value is not equal to metadata.generation, then the desired and conditions fields may represent a previous version. versionHash string versionHash is a fingerprint of the content that the cluster will be updated with. It is used by the operator to avoid unnecessary work and is for internal use only. 6.1.7. .status.capabilities Description capabilities describes the state of optional, core cluster components. Type object Property Type Description enabledCapabilities array (string) enabledCapabilities lists all the capabilities that are currently managed. knownCapabilities array (string) knownCapabilities lists all the capabilities known to the current cluster. 6.1.8. .status.conditionalUpdates Description conditionalUpdates contains the list of updates that may be recommended for this cluster if it meets specific required conditions. Consumers interested in the set of updates that are actually recommended for this cluster should use availableUpdates. This list may be empty if no updates are recommended, if the update service is unavailable, or if an empty or invalid channel has been specified. Type array 6.1.9. .status.conditionalUpdates[] Description ConditionalUpdate represents an update which is recommended to some clusters on the version the current cluster is reconciling, but which may not be recommended for the current cluster.
Type object Required release risks Property Type Description conditions array conditions represents the observations of the conditional update's current status. Known types are: * Evaluating, for whether the cluster-version operator will attempt to evaluate any risks[].matchingRules. * Recommended, for whether the update is recommended for the current cluster. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } release object release is the target of the update. risks array risks represents the range of issues associated with updating to the target release. The cluster-version operator will evaluate all entries, and only recommend the update if there is at least one entry and all entries recommend the update. risks[] object ConditionalUpdateRisk represents a reason and cluster-state for not recommending a conditional update. 6.1.10. .status.conditionalUpdates[].conditions Description conditions represents the observations of the conditional update's current status. Known types are: * Evaluating, for whether the cluster-version operator will attempt to evaluate any risks[].matchingRules. * Recommended, for whether the update is recommended for the current cluster. Type array 6.1.11. .status.conditionalUpdates[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. 
status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 6.1.12. .status.conditionalUpdates[].release Description release is the target of the update. Type object Property Type Description channels array (string) channels is the set of Cincinnati channels to which the release currently belongs. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. url string url contains information about this release. This URL is set by the 'url' metadata property on a release or the metadata returned by the update API and should be displayed as a link in user interfaces. The URL field may not be set for test or nightly releases. version string version is a semantic version identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.13. .status.conditionalUpdates[].risks Description risks represents the range of issues associated with updating to the target release. The cluster-version operator will evaluate all entries, and only recommend the update if there is at least one entry and all entries recommend the update. Type array 6.1.14. .status.conditionalUpdates[].risks[] Description ConditionalUpdateRisk represents a reason and cluster-state for not recommending a conditional update. Type object Required matchingRules message name url Property Type Description matchingRules array matchingRules is a slice of conditions for deciding which clusters match the risk and which do not. The slice is ordered by decreasing precedence. The cluster-version operator will walk the slice in order, and stop after the first it can successfully evaluate. If no condition can be successfully evaluated, the update will not be recommended. matchingRules[] object ClusterCondition is a union of typed cluster conditions. The 'type' property determines which of the type-specific properties are relevant. When evaluated on a cluster, the condition may match, not match, or fail to evaluate. message string message provides additional information about the risk of updating, in the event that matchingRules match the cluster state. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. name string name is the CamelCase reason for not recommending a conditional update, in the event that matchingRules match the cluster state. url string url contains information about this risk. 6.1.15. .status.conditionalUpdates[].risks[].matchingRules Description matchingRules is a slice of conditions for deciding which clusters match the risk and which do not. The slice is ordered by decreasing precedence. The cluster-version operator will walk the slice in order, and stop after the first it can successfully evaluate. If no condition can be successfully evaluated, the update will not be recommended. Type array 6.1.16. .status.conditionalUpdates[].risks[].matchingRules[] Description ClusterCondition is a union of typed cluster conditions. The 'type' property determines which of the type-specific properties are relevant.
When evaluated on a cluster, the condition may match, not match, or fail to evaluate. Type object Required type Property Type Description promql object promQL represents a cluster condition based on PromQL. type string type represents the cluster-condition type. This defines the members and semantics of any additional properties. 6.1.17. .status.conditionalUpdates[].risks[].matchingRules[].promql Description promQL represents a cluster condition based on PromQL. Type object Required promql Property Type Description promql string PromQL is a PromQL query classifying clusters. This query should return a 1 in the match case and a 0 in the does-not-match case. Queries which return no time series, or which return values besides 0 or 1, are evaluation failures. 6.1.18. .status.conditions Description conditions provides information about the cluster version. The condition "Available" is set to true if the desiredUpdate has been reached. The condition "Progressing" is set to true if an update is being applied. The condition "Degraded" is set to true if an update is currently blocked by a temporary or permanent error. Conditions are only valid for the current desiredUpdate when metadata.generation is equal to status.generation. Type array 6.1.19. .status.conditions[] Description ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 6.1.20. .status.desired Description desired is the version that the cluster is reconciling towards. If the cluster is not yet fully initialized desired will be set with the information available, which may be an image or a tag. Type object Property Type Description channels array (string) channels is the set of Cincinnati channels to which the release currently belongs. image string image is a container image location that contains the update. When this field is part of spec, image is optional if version is specified and the availableUpdates field contains a matching version. url string url contains information about this release. This URL is set by the 'url' metadata property on a release or the metadata returned by the update API and should be displayed as a link in user interfaces. The URL field may not be set for test or nightly releases. version string version is a semantic version identifying the update version. When this field is part of spec, version is optional if image is specified. 6.1.21. .status.history Description history contains a list of the most recent versions applied to the cluster. This value may be empty during cluster startup, and then will be updated when a new update is being applied. The newest update is first in the list and it is ordered by recency. Updates in the history have state Completed if the rollout completed - if an update was failing or halfway applied the state will be Partial. Only a limited amount of update history is preserved. Type array 6.1.22.
.status.history[] Description UpdateHistory is a single attempted update to the cluster. Type object Required image startedTime state verified Property Type Description acceptedRisks string acceptedRisks records risks which were accepted to initiate the update. For example, it may mention an Upgradeable=False or missing signature that was overridden via desiredUpdate.force, or an update that was initiated despite not being in the availableUpdates set of recommended update targets. completionTime string completionTime, if set, is when the update was fully applied. The update that is currently being applied will have a null completion time. Completion time will always be set for entries that are not the current update (usually to the started time of the next update). image string image is a container image location that contains the update. This value is always populated. startedTime string startedTime is the time at which the update was started. state string state reflects whether the update was fully applied. The Partial state indicates the update is not fully applied, while the Completed state indicates the update was successfully rolled out at least once (all parts of the update successfully applied). verified boolean verified indicates whether the provided update was properly verified before it was installed. If this is false the cluster may not be trusted. Verified does not cover upgradeable checks that depend on the cluster state at the time when the update target was accepted. version string version is a semantic version identifying the update version. If the requested image does not define a version, or if a failure occurs retrieving the image, this value may be empty. 6.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/clusterversions DELETE : delete collection of ClusterVersion GET : list objects of kind ClusterVersion POST : create a ClusterVersion /apis/config.openshift.io/v1/clusterversions/{name} DELETE : delete a ClusterVersion GET : read the specified ClusterVersion PATCH : partially update the specified ClusterVersion PUT : replace the specified ClusterVersion /apis/config.openshift.io/v1/clusterversions/{name}/status GET : read status of the specified ClusterVersion PATCH : partially update status of the specified ClusterVersion PUT : replace status of the specified ClusterVersion 6.2.1. /apis/config.openshift.io/v1/clusterversions Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterVersion Table 6.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize.
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterVersion Table 6.4.
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Response body 200 - OK ClusterVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterVersion Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ClusterVersion schema Table 6.8. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 202 - Accepted ClusterVersion schema 401 - Unauthorized Empty 6.2.2. /apis/config.openshift.io/v1/clusterversions/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the ClusterVersion Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterVersion Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted.
Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterVersion Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterVersion Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17.
Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterVersion Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body ClusterVersion schema Table 6.21. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 401 - Unauthorized Empty 6.2.3. /apis/config.openshift.io/v1/clusterversions/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the ClusterVersion Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterVersion Table 6.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterVersion Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterVersion Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body ClusterVersion schema Table 6.31. HTTP responses HTTP code Response body 200 - OK ClusterVersion schema 201 - Created ClusterVersion schema 401 - Unauthorized Empty
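To make the schema concrete, here is a minimal, illustrative sketch of a ClusterVersion object as it might be read back through the status endpoints above. Every value in it (versions, image digests, timestamps, the risk name, and the PromQL query) is a hypothetical placeholder rather than output from a real cluster:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.12                  # hypothetical channel
  clusterID: 9a2f7c11-example           # hypothetical cluster ID
status:
  desired:
    version: 4.12.9                     # hypothetical target version
    image: quay.io/openshift-release-dev/ocp-release@sha256:aaa111   # hypothetical digest
  history:
  - state: Completed                    # would be Partial if the rollout never finished
    version: 4.12.8
    image: quay.io/openshift-release-dev/ocp-release@sha256:bbb222   # hypothetical digest
    startedTime: "2023-01-01T00:00:00Z"
    completionTime: "2023-01-01T01:00:00Z"
    verified: true
  conditions:
  - type: Available
    status: "True"
    lastTransitionTime: "2023-01-01T01:00:00Z"
  conditionalUpdates:
  - release:
      version: 4.12.10
      image: quay.io/openshift-release-dev/ocp-release@sha256:ccc333 # hypothetical digest
    risks:
    - name: ExampleRisk                 # hypothetical CamelCase risk name
      url: https://access.redhat.com/solutions/example
      message: Clusters matching the rule below may hit a known issue.
      matchingRules:
      - type: PromQL
        promql:
          # Hypothetical query; it must return 1 (match) or 0 (no match).
          promql: 'group(cluster_infrastructure_provider{type="Example"}) or 0 * group(cluster_infrastructure_provider)'

On a live cluster, the real object can be inspected with oc get clusterversion version -o yaml.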
Chapter 24. Set Up the L1 Cache
Chapter 24. Set Up the L1 Cache 24.1. About the L1 Cache The Level 1 (or L1) cache stores remote cache entries after they are initially accessed, preventing unnecessary remote fetch operations for each subsequent use of the same entries. The L1 cache is only available when Red Hat JBoss Data Grid's cache mode is set to distribution. In other cache modes any configuration related to the L1 cache is ignored. When caches are configured with distributed mode, the entries are evenly distributed between all clustered caches. Each entry is copied to a desired number of owners, which can be less than the total number of caches. As a result, the system's scalability is improved, but it also means that some entries are not available on all nodes and must be fetched from their owner node. In this situation, configure the Cache component to use the L1 cache to temporarily store entries that it does not own, preventing repeated fetches for subsequent uses. Each time a key is updated, an invalidation message is generated. This message is multicast to each node that contains data that corresponds to current L1 cache entries. The invalidation message ensures that each of these nodes marks the relevant entry as invalidated. Also, when the location of an entry changes in the cluster, the corresponding L1 cache entry is invalidated to prevent outdated cache entries.
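The exact syntax depends on the schema version in use; the following is a minimal library-mode configuration sketch, assuming the JBoss Data Grid 6 (Infinispan) XML schema, with an illustrative cache name and a one-minute L1 lifespan:

<namedCache name="distributedCacheWithL1">
   <clustering mode="distribution">
      <sync/>
      <!-- Each entry is stored on two owner nodes; other nodes must fetch it remotely. -->
      <hash numOwners="2"/>
      <!-- Non-owner nodes keep remotely fetched entries in L1 for up to 60 seconds. -->
      <l1 enabled="true" lifespan="60000"/>
   </clustering>
</namedCache>

The lifespan bounds how long an unused L1 entry is retained before it expires on its own; the invalidation messages described above remove entries eagerly whenever the underlying key changes.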
7.232. xcb-util
7.232. xcb-util 7.232.1. RHBA-2015:1318 - xcb-util bug fix update Updated xcb-util packages that fix one bug are now available for Red Hat Enterprise Linux 6. The xcb-util packages provide a number of libraries which utilize libxcb, the core X protocol library, and some of the extension libraries. Bug Fix BZ# 1167486 The libxcb-icccm.so.1 file was replaced with libxcb-icccm.so.4 in the upgrade of the xcb-util packages. Consequently, packages that required the old file could no longer be installed, and if such packages were already installed, xcb-util could not be upgraded. With this update, the libxcb-icccm.so.1 file has been made available again in a new subpackage called compat-xcb-util. As a result, the dependency on libxcb-icccm.so.1 is satisfied. Users of xcb-util are advised to upgrade to these updated packages, which fix this bug.
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1]
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1] Description OAuthAuthorizeToken describes an OAuth authorization token Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this token. codeChallenge string CodeChallenge is the optional code_challenge associated with this authorization code, as described in rfc7636 codeChallengeMethod string CodeChallengeMethod is the optional code_challenge_method associated with this authorization code, as described in rfc7636 expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta redirectURI string RedirectURI is the redirection associated with the token. scopes array (string) Scopes is an array of the requested scopes. state string State data from request userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token. UserUID and UserName must both match for this token to be valid. 3.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthauthorizetokens DELETE : delete collection of OAuthAuthorizeToken GET : list or watch objects of kind OAuthAuthorizeToken POST : create an OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens GET : watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} DELETE : delete an OAuthAuthorizeToken GET : read the specified OAuthAuthorizeToken PATCH : partially update the specified OAuthAuthorizeToken PUT : replace the specified OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} GET : watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/oauth.openshift.io/v1/oauthauthorizetokens Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OAuthAuthorizeToken Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both.
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAuthorizeToken Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Response body 200 - OK OAuthAuthorizeTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAuthorizeToken Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.9. HTTP responses HTTP code Response body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.2. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuthAuthorizeToken Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy.
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Response body 200 - OK OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAuthorizeToken Table 3.17. HTTP responses HTTP code Response body 200 - OK OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAuthorizeToken Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Response body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAuthorizeToken Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.23. HTTP responses HTTP code Response body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.4. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out, and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
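For illustration only, the watch endpoints in this chapter can be exercised with curl. The following sketch is not part of the upstream API reference; the API server address and token object name are placeholders, and it assumes a bearer token with permission to watch OAuth resources:
$ TOKEN=$(oc whoami -t)
$ curl -N -k -H "Authorization: Bearer $TOKEN" "https://<api_server>:6443/apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/<name>?timeoutSeconds=60"
The -N option disables curl's output buffering so that WatchEvent objects are printed as they arrive on the stream.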
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/oauth_apis/oauthauthorizetoken-oauth-openshift-io-v1
Chapter 29. Removing a Storage Device
Chapter 29. Removing a Storage Device Before removing access to the storage device itself, it is advisable to back up data from the device first. Afterwards, flush I/O and remove all operating system references to the device (as described below). If the device uses multipathing, then do this for the multipath "pseudo device" ( Section 28.1, "WWID" ) and each of the identifiers that represent a path to the device. If you are only removing a path to a multipath device, and other paths will remain, then the procedure is simpler, as described in Chapter 31, Adding a Storage Device or Path . Removal of a storage device is not recommended when the system is under memory pressure, since the I/O flush will add to the load. To determine the level of memory pressure, run the command vmstat 1 100 ; device removal is not recommended if: Free memory is less than 5% of the total memory in more than 10 samples per 100 (the command free can also be used to display the total memory). Swapping is active (non-zero si and so columns in the vmstat output). The general procedure for removing all access to a device is as follows: Procedure 29.1. Ensuring a Clean Device Removal Close all users of the device and back up device data as needed. Use umount to unmount any file systems that mounted the device. Remove the device from any md RAID set or LVM volume that is using it. If the device is a member of an LVM volume group, then it may be necessary to move data off the device using the pvmove command, then use the vgreduce command to remove the physical volume, and (optionally) pvremove to remove the LVM metadata from the disk. If the device uses multipathing, run multipath -l and note all the paths to the device. Afterwards, remove the multipathed device using multipath -f device . Run blockdev --flushbufs device to flush any outstanding I/O to all paths to the device. This is particularly important for raw devices, where there is no umount or vgreduce operation to cause an I/O flush. Remove any reference to the device's path-based name, like /dev/sd , /dev/disk/by-path or the major:minor number, in applications, scripts, or utilities on the system. This is important in ensuring that different devices added in the future will not be mistaken for the current device. Finally, remove each path to the device from the SCSI subsystem. To do so, use the command echo 1 > /sys/block/ device-name /device/delete where device-name may be sde , for example. Another variation of this operation is echo 1 > /sys/class/scsi_device/ h : c : t : l /device/delete , where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN. Note The older form of these commands, echo "scsi remove-single-device 0 0 0 0" > /proc/scsi/scsi , is deprecated. You can determine the device-name , HBA number, HBA channel, SCSI target ID and LUN for a device from various commands, such as lsscsi , scsi_id , multipath -l , and ls -l /dev/disk/by-* . After performing Procedure 29.1, "Ensuring a Clean Device Removal" , a device can be physically removed safely from a running system. It is not necessary to stop I/O to other devices while doing so. Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus (as described in Chapter 34, Scanning Storage Interconnects ) to cause the operating system state to be updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and devices may be removed unexpectedly. 
If it is necessary to perform a rescan of an interconnect, it must be done while I/O is paused, as described in Chapter 34, Scanning Storage Interconnects .
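As an illustrative sketch only, the full sequence from Procedure 29.1 for a hypothetical multipath device mpathb with a single path sde (all device names and mount points are placeholders) might look like the following, run as root:
# umount /mnt/data
# multipath -l mpathb
# multipath -f mpathb
# blockdev --flushbufs /dev/sde
# echo 1 > /sys/block/sde/device/delete
On a real system a multipath device usually has several paths, and the flush and delete steps must be repeated for each of them.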
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/removing_devices
Chapter 7. Configuring SCAP contents
Chapter 7. Configuring SCAP contents You can upload SCAP data streams and tailoring files to define compliance policies. 7.1. Loading the default SCAP contents By loading the default SCAP contents on Satellite Server, you ensure that the data streams from the SCAP Security Guide (SSG) are loaded and assigned to all organizations and locations. SSG is provided by the operating system of Satellite Server and installed in /usr/share/xml/scap/ssg/content/ . Note that the available data streams depend on the operating system version on which Satellite runs. You can only use this SCAP content to scan hosts that have the same minor RHEL version as your Satellite Server. For more information, see Section 7.2, "Getting supported SCAP contents for RHEL" . Prerequisites Your user account has a role assigned that has the create_scap_contents permission. Procedure Use the following Hammer command on Satellite Server: 7.2. Getting supported SCAP contents for RHEL You can get the latest SCAP Security Guide (SSG) for Red Hat Enterprise Linux on the Red Hat Customer Portal. You have to get a version of SSG that is designated for the minor RHEL version of your hosts. Procedure Access the SCAP Security Guide in the package browser . From the Version menu, select the latest SSG version for the minor version of RHEL that your hosts are running. For example, for RHEL 8.6, select a version named *.el8_6 . Download the package RPM. Extract the data-stream file ( *-ds.xml ) from the RPM. For example: Upload the data stream to Satellite. For more information, see Section 7.3, "Uploading additional SCAP content" . Additional resources Supported versions of the SCAP Security Guide in RHEL in the Red Hat Knowledgebase SCAP Security Guide profiles supported in RHEL 9 in Red Hat Enterprise Linux 9 Security hardening SCAP Security Guide profiles supported in RHEL 8 in Red Hat Enterprise Linux 8 Security hardening SCAP Security Guide profiles supported in RHEL 7 in the Red Hat Enterprise Linux 7 Security Guide 7.3. Uploading additional SCAP content You can upload additional SCAP content into Satellite Server, either content created by yourself or obtained elsewhere. Note that Red Hat only provides support for SCAP content obtained from Red Hat. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Your user account has a role assigned that has the create_scap_contents permission. You have acquired a SCAP data-stream file. Procedure In the Satellite web UI, navigate to Hosts > Compliance > SCAP contents . Click Upload New SCAP Content . Enter a title in the Title text box, such as My SCAP Content . In Scap File , click Choose file , navigate to the location containing a SCAP data-stream file and click Open . On the Locations tab, select locations. On the Organizations tab, select organizations. Click Submit . If the SCAP content file is loaded successfully, a message similar to Successfully created My SCAP Content is displayed. CLI procedure Place the SCAP data-stream file in a directory on your Satellite Server, such as /usr/share/xml/scap/my_content/ . Run the following Hammer command on Satellite Server: Verification List the available SCAP contents . The list of SCAP contents includes the new title. 7.4. Tailoring XCCDF profiles You can customize existing XCCDF profiles using tailoring files without editing the original SCAP content. A single tailoring file can contain customizations of multiple XCCDF profiles. You can create a tailoring file using the SCAP Workbench tool. 
For more information on using the SCAP Workbench tool, see Customizing SCAP Security Guide for your use case . Then you can assign a tailoring file to a compliance policy to customize an XCCDF profile in the policy. 7.5. Uploading a tailoring file After uploading a tailoring file, you can apply it in a compliance policy to customize an XCCDF profile. Prerequisites Your user account has a role assigned that has the create_tailoring_files permission. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Tailoring Files and click New Tailoring File . Enter a name in the Name text box. Click Choose File , navigate to the location containing the tailoring file and select Open . Click Submit to upload the chosen tailoring file.
[ "hammer scap-content bulk-upload --type default", "rpm2cpio scap-security-guide-0.1.69-3.el8_6.noarch.rpm | cpio -iv --to-stdout ./usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml > ssg-rhel-8.6-ds.xml", "hammer scap-content bulk-upload --type directory --directory /usr/share/xml/scap/my_content/ --location \" My_Location \" --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_security_compliance/configuring_scap_contents_security-compliance
Chapter 3. Quay.io user interface overview
Chapter 3. Quay.io user interface overview The user interface (UI) of Quay.io is a fundamental component that serves as the user's gateway to managing and interacting with container images within the platform's ecosystem. Quay.io's UI is designed to provide an intuitive and user-friendly interface, making it easy for users of all skill levels to navigate and harness Quay.io's features and functionalities. This documentation section aims to introduce users to the key elements and functionalities of Quay.io's UI. It will cover essential aspects such as the UI's layout, navigation, and key features, providing a solid foundation for users to explore and make the most of Quay.io's container registry service. Throughout this documentation, step-by-step instructions, visual aids, and practical examples are provided on the following topics: Exploring applications and repositories Using the Quay.io tutorial Pricing and Quay.io plans Signing in and using Quay.io features Collectively, this document ensures that users can quickly grasp the UI's nuances and successfully navigate their containerization journey with Quay.io. 3.1. Quay.io landing page The Quay.io landing page serves as the central hub for users to access the container registry services offered. This page provides essential information and links to guide users in securely storing, building, and deploying container images effortlessly. The landing page of Quay.io includes links to the following resources: Explore . On this page, you can search the Quay.io database for various applications and repositories. Tutorial . On this page, you can take a step-by-step walkthrough that shows you how to use Quay.io. Pricing . On this page, you can learn about the various pricing tiers offered for Quay.io. There are also various FAQs addressed on this page. Sign in . By clicking this link, you are redirected to sign in to your Quay.io repository. The landing page also includes information about scheduled maintenance. During scheduled maintenance, Quay.io is operational in read-only mode, and pulls function as normal. Pushes and builds are non-operational during scheduled maintenance. You can subscribe to updates regarding Quay.io maintenance by navigating to the Quay.io Status page and clicking Subscribe To Updates . The landing page also includes links to the following resources: Documentation . This page provides documentation for using Quay.io. Terms . This page provides legal information about Red Hat Online Services. Privacy . This page provides information about Red Hat's Privacy Statement. Security . This page provides information about Quay.io security, including SSL/TLS, encryption, passwords, access controls, firewalls, and data resilience. About . This page includes information about packages and projects used and a brief history of the product. Contact . This page includes information about support and contacting the Red Hat Support Team. All Systems Operational . This page includes information about the status of Quay.io and a brief history of maintenance. Cookies. By clicking this link, a popup box appears that allows you to set your cookie preferences. You can also find information about Trying Red Hat Quay on premise or Trying Red Hat Quay on the cloud , which redirects you to the Pricing page. Each option offers a free trial. 3.1.1. Creating a Quay.io account New users of Quay.io are required to both Register for a Red Hat account and create a Quay.io username. 
These accounts are correlated, with two distinct differences: The Quay.io account can be used to push and pull container images or Open Container Initiative images to and from Quay.io for storage. The Red Hat account provides users access to the Quay.io user interface. For paying customers, this account can also be used to access images from the Red Hat Ecosystem Catalog , which can be pushed to their Quay.io repository. Users must first register for a Red Hat account, and then create a Quay.io account. Users need both accounts to properly use all features of Quay.io. 3.1.1.1. Registering for a Red Hat Account Use the following procedure to register for a Red Hat account for Quay.io. Procedure Navigate to the Red Hat Customer Portal . In the navigation pane, click Log In . On the login page, click Register for a Red Hat Account . Enter a Red Hat login ID. Enter a password. Enter the following personal information: First name Last name Email address Phone number Enter the following contact information that is relevant to your country or region. For example: Country/region Address Postal code City County Select and agree to Red Hat's terms and conditions. Click Create my account . Navigate to Quay.io and log in. 3.1.1.2. Creating a Quay.io user account Use the following procedure to create a Quay.io user account. Prerequisites You have created a Red Hat account. Procedure If required, resolve the captcha by clicking I am not a robot and confirming. You are redirected to a Confirm Username page. On the Confirm Username page, enter a username. By default, a username is generated. If the same username already exists, a number is added at the end to make it unique. This username is used as a namespace in the Quay Container Registry. After deciding on a username, click Confirm Username . You are redirected to the Quay.io Repositories page, which serves as a dedicated hub where users can access and manage their repositories with ease. From this page, users can efficiently organize, navigate, and interact with their container images and related resources. 3.1.1.3. Quay.io Single Sign On support Red Hat Single Sign On (SSO) can be used with Quay.io. Use the following procedure to set up Red Hat SSO with Quay.io. For most users, these accounts are already linked. However, for some legacy Quay.io users, this procedure might be required. Prerequisites You have created a Quay.io account. Procedure Navigate to the Quay.io Recovery page . Enter your username and password, then click Sign in to Quay Container Registry . In the navigation pane, click your username Account Settings . In the navigation pane, click External Logins and Applications . Click Attach to Red Hat . If you are already signed in to Red Hat SSO, your account is automatically linked. Otherwise, you are prompted to sign in to Red Hat SSO by entering your Red Hat login or email, and the password. Alternatively, you might need to create a new account first. After signing in to Red Hat SSO, you can choose to authenticate against Quay.io using your Red Hat account from the login page. Additional resources For more information, see Quay.io Now Supports Red Hat Single Sign On . 3.1.2. Exploring Quay.io The Quay.io Explore page is a valuable hub that allows users to delve into a vast collection of container images, applications, and repositories shared by the Quay.io community. 
With its intuitive and user-friendly design, the Explore page offers a powerful search function, enabling users to effortlessly discover containerized applications and resources. 3.1.3. Trying Quay.io (deprecated) Note The Red Hat Quay tutorial is currently deprecated and will be removed when the v2 UI goes generally available (GA). The Quay.io Tutorial page offers users an introduction to the Quay.io container registry service. By clicking Continue Tutorial , users learn how to perform the following tasks on Quay.io: Logging into Quay Container Registry from the Docker CLI Starting a container Creating images from a container Pushing a repository to Quay Container Registry Viewing a repository Setting up build triggers Changing a repository's permissions 3.1.4. Information about Quay.io pricing In addition to a free tier, Quay.io also offers several paid plans that have enhanced benefits. The Quay.io Pricing page offers information about Quay.io plans and the associated prices of each plan. The cost of each tier can be found on the Pricing page. All Quay.io plans include the following benefits: Continuous integration Public repositories Robot accounts Teams SSL/TLS encryption Logging and auditing Invoice history Quay.io subscriptions are handled by the Stripe payment processing platform. A valid credit card is required to sign up for Quay.io. To sign up for Quay.io, use the following procedure. Procedure Navigate to the Quay.io Pricing page . Decide on a plan, for example, Small , and click Buy Now . You are redirected to the Create New Organization page. Enter the following information: Organization Name Organization Email Optional. You can select a different plan if you want a larger plan than, for example, Small . Resolve the captcha, and select Create Organization . You are redirected to Stripe. Enter the following information: Card information , including MM/YY and the CVC Name on card Country or region ZIP (if applicable) Check the box if you want your information to be saved. Phone Number Click Subscribe after all boxes have been filled.
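The tutorial tasks listed in Section 3.1.3 map onto standard container CLI operations. As an illustrative sketch only, with placeholder image and repository names that are not created anywhere in this document:
$ docker login quay.io
$ docker tag <local_image> quay.io/<username>/<repository>:latest
$ docker push quay.io/<username>/<repository>:latest
After a successful push, the repository should be visible on your Quay.io Repositories page.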
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/quayio-ui-overview
function::sock_fam_num2str
function::sock_fam_num2str Name function::sock_fam_num2str - Given a protocol family number, return a string representation Synopsis Arguments family The family number
[ "sock_fam_num2str:string(family:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sock-fam-num2str
Preface
Preface Red Hat offers administrators tools for gathering data for your Red Hat Quay deployment. You can use this data to troubleshoot your Red Hat Quay deployment yourself, or file a support ticket.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/troubleshooting_red_hat_quay/pr01
Chapter 9. Removing the kubeadmin user
Chapter 9. Removing the kubeadmin user 9.1. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 9.2. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: $ oc delete secrets kubeadmin -n kube-system
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/removing-kubeadmin
33.7. Managing Reverse DNS Zones
33.7. Managing Reverse DNS Zones A reverse DNS zone can be identified in the following two ways: By the zone name, in the format reverse_ipv4_address .in-addr.arpa or reverse_ipv6_address .ip6.arpa . The reverse IP address is created by reversing the order of the components of the IP address. For example, if the IPv4 network is 192.0.2.0/24 , the reverse zone name is 2.0.192.in-addr.arpa. (with the trailing period). By the network address, in the format network_ip_address / subnet_mask_bit_count To create the reverse zone by its IP network, set the network information to the (forward-style) IP address, with the subnet mask bit count. The bit count must be a multiple of eight for IPv4 addresses or a multiple of four for IPv6 addresses. Adding a Reverse DNS Zone in the Web UI Open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.30. DNS Zone Management Click Add at the top of the list of all zones. Figure 33.31. Adding a Reverse DNS Zone Fill in the zone name or the reverse zone IP network. For example, to add a reverse DNS zone by the zone name: Figure 33.32. Creating a Reverse Zone by Name Alternatively, to add a reverse DNS zone by the reverse zone IP network: Figure 33.33. Creating a Reverse Zone by IP Network The validator for the Reverse zone IP network field warns you about an invalid network address during typing. The warning will disappear once you enter the full network address. Click Add to confirm the new reverse zone. Adding a Reverse DNS Zone from the Command Line To create a reverse DNS zone from the command line, use the ipa dnszone-add command. For example, to create the reverse zone by the zone name: Alternatively, to create the reverse zone by the IP network: Other Management Operations for Reverse DNS Zones Section 33.4, "Managing Master DNS Zones" describes other zone management operations, some of which are also applicable to reverse DNS zone management, such as editing or disabling and enabling DNS zones.
[ "[user@server]USD ipa dnszone-add 2.0.192.in-addr.arpa.", "[user@server ~]USD ipa dnszone-add --name-from-ip= 192.0.2.0/24" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-reverse-dns-zones
Chapter 1. Overview
Chapter 1. Overview Due to the growing demands on availability, one copy of data is not enough. To ensure business continuity, a reliable and highly available architecture must replicate data across more than just one system. Using multitarget system replication, the primary system can replicate data changes to more than one secondary system. For more information, see SAP HANA Multitarget System Replication . This document describes how to configure a replication site for disaster recovery using SAP HANA Multitarget System Replication on a 2-node cluster, installed as described in Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On . A sample configuration looks like this: The initial setup is as follows: Replicate Primary site 1 (DC1) to Secondary site 2 (DC2) Replicate Primary site 1 (DC1) to Secondary site 3 (DC3) If the primary site fails, the primary role switches to secondary site 2 (DC2), and the former primary site 1 (DC1) becomes a secondary site. When failover occurs, this solution ensures that the configured primary site is also switched at the third disaster recovery (DR) site. The configuration after failover is as follows: Primary running on DC2 Secondary running on DC1 (synced from DC2) Secondary running on DC3 (synced from DC2) The SAP HANA instance on remotehost3 will be automatically re-registered to the new primary as long as this instance is up and running during the failover. This document also describes an example of switching the primary database to the third site. Note that further network configuration is required to connect clients to the database; this is outside the scope of this document. For further information, please check the following: SAP HANA Administration Guide for SAP HANA Platform How to Setup SAP HANA Multi-Target System Replication
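Re-registration of the third site to the new primary happens automatically in this solution, but conceptually it corresponds to an hdbnsutil call such as the following sketch, run as the <sid>adm user on remotehost3. All values are placeholders, and the replication and operation modes depend on your environment:
hdbnsutil -sr_register --remoteHost=<new_primary_host> --remoteInstance=<instance_number> --replicationMode=async --name=DC3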
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_overvieww_configuring-hana-scale-up-multitarget-system-replication-disaster-recovery
16.2. The Graphical Installation Program User Interface
16.2. The Graphical Installation Program User Interface If you have used a graphical user interface (GUI) before, you are already familiar with this process; use your mouse to navigate the screens, click buttons, and enter text in text fields. You can also navigate through the installation using the keyboard. The Tab key moves you around the screen, the Up and Down arrow keys scroll through lists, the + and - keys expand and collapse lists, and Space and Enter select or deselect a highlighted item. You can also use the Alt + X key command combination as a way of clicking on buttons or making other screen selections, where X is replaced with any underlined letter appearing within that screen. If you would like to use a graphical installation with a system that does not have that capability, such as a partitioned system, you can use VNC or display forwarding. Both the VNC and display forwarding options require an active network during the installation and the use of boot time arguments. For more information on available boot time options, refer to Chapter 28, Boot Options . Note If you do not wish to use the GUI installation program, the text mode installation program is also available. To start the text mode installation program, use the following command at the yaboot: prompt: Refer to Section 14.1, "The Boot Menu" for a description of the Red Hat Enterprise Linux boot menu and to Section 15.1, "The Text Mode Installation Program User Interface" for a brief overview of text mode installation instructions. It is highly recommended that installs be performed using the GUI installation program. The GUI installation program offers the full functionality of the Red Hat Enterprise Linux installation program, including LVM configuration, which is not available during a text mode installation. Users who must use the text mode installation program can follow the GUI installation instructions and obtain all needed information.
[ "linux text" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-guimode-interface-ppc
10. Desktop
10. Desktop 10.1. Graphical Startup Red Hat Enterprise Linux 6 introduces a new, seamless graphical boot sequence that commences immediately after the hardware has initialized. Figure 8. Graphical Boot Screen The new graphical boot sequence provides the user with simple visual feedback on the progress of the system boot, and seamlessly switches to the login screen. The Red Hat Enterprise Linux 6 graphical boot sequence is enabled by the Kernel Modesetting feature and is available on ATI, Intel and NVIDIA graphics hardware. Note System Administrators are still able to view detailed progress of the boot sequence by pressing the F11 key at any time during the graphical boot. 10.2. Suspend and Resume Suspend and resume is a current feature in Red Hat Enterprise Linux that allows a machine to be placed into and removed from a low power state. The new kernel modesetting feature enables enhanced support for the suspend and resume feature. Previously, graphics hardware was suspended and resumed via userspace applications. In Red Hat Enterprise Linux 6, this functionality has moved into the kernel, providing a more reliable mechanism for enabling low power mode. 10.3. Multiple Display Support Red Hat Enterprise Linux 6 features enhanced support for workstations with multiple displays. When an additional display is attached to a machine, the graphics driver detects it and automatically adds it to the desktop. Conversely, when a display is unplugged, the graphics driver automatically removes it from the desktop. Note By default, the additional display is added in a spanning layout to the left of the current display. The automatic detection of additional displays is useful in situations where displays are added and removed frequently (e.g. setting up a laptop with an external projector) 10.3.1. Display Preferences The new Display Preferences dialog provides the ability to further customize multiple display layouts. Figure 9. Display Preferences dialog The new dialog provides the ability to instantly change the positioning, resolution, refresh rate and rotation settings for each individual display that is currently attached to a machine. 10.4. nouveau Driver for NVIDIA Graphics Devices Red Hat Enterprise Linux 6 features the new nouveau driver as default for NVIDIA graphics devices up to and including the NVIDIA GeForce 200 series. nouveau supports 2D and software video acceleration and kernel modesetting. Note The default driver for NVIDIA hardware (nv) is still available in Red Hat Enterprise Linux 6. 10.5. Internationalization 10.5.1. IBus Red Hat Enterprise Linux 6 introduces the Intelligent Input Bus (IBus) as the default input method framework for Asian languages. 10.5.2. Choosing and Configuring Input Methods Red Hat Enterprise Linux 6 includes im-chooser , a graphical user interface to enable and configure input methods. im-chooser (located under System > Preferences > Input Method in the main menu) allows the user to easily enable and configure the input methods available on the system. 10.5.3. Indic Onscreen Keyboard The new Indic Onscreen Keyboard (iok) is a screen based virtual keyboard for Indic languages, enabling input using Inscript keymap layouts and other 1:1 key mappings. 10.5.4. Indic Collation Support Red Hat Enterprise Linux 6 includes improved sorting for Indic languages. The order of menus and other interface elements are now correctly sorted in Indic languages. 10.5.5. 
Fonts Font support in Red Hat Enterprise Linux 6 has been improved, with updates to fonts for Chinese, Japanese, Korean, Indic and Thai languages. 10.6. Applications The majority of applications on the Red Hat Enterprise Linux 6 desktop have been updated. The following section documents the most notable updates. 10.6.1. Firefox Red Hat Enterprise Linux 6 introduces version 3.5 of the Mozilla Firefox web browser. For details on the new features in Firefox, refer to the Firefox Release Notes 10.6.2. Thunderbird 3 Red Hat Enterprise Linux 6 includes version 3 of the Mozilla Thunderbird email client, providing tabbed messaging, smart folders, and a message archive. For further details on new features in Thunderbird 3, refer to the Thunderbird Release Notes 10.6.3. OpenOffice.org 3.1 Red Hat Enterprise Linux 6 features OpenOffice.org 3.1, adding support for reading a wider range of file formats, including Microsoft Office OOXML format. Additionally, OpenOffice.org has improved file locking support and has the ability to render graphics using anti-aliasing. Figure 10. OpenOffice.org 3.1 Full details on all the features in this version of OpenOffice.org are available in the OpenOffice.org Release Notes . 10.7. NetworkManager NetworkManager is the desktop tool that is used to set up, configure and manage a wide range of network connection types. Figure 11. NetworkManager In Red Hat Enterprise Linux 6, NetworkManager provides enhanced support for mobile broadband devices and IPv6, and adds support for connecting to Bluetooth Personal Area Network (PAN) devices. 10.8. KDE 4.3 Red Hat Enterprise Linux 6 provides KDE 4.3 as an alternative desktop environment. KDE 4.3 delivers an entirely new user experience, featuring: The new Plasma Desktop Workspace, including Plasma Widgets for a more customizable desktop. Oxygen, with enhanced icon and sound themes. Enhancements to the KDE Window Manager (kwin) Additionally, the dolphin file browser has replaced konqueror as the KDE default.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/desktop
Authorization
Authorization Red Hat Developer Hub 1.3 Configuring authorization by using role-based access control (RBAC) in Red Hat Developer Hub Red Hat Customer Content Services
[ "plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-rbac disabled: false", "permission: enabled: true rbac: admin: users: - name: user:default/ <your_policy_administrator_name>", "curl -v -H \"Authorization: Bearer <token> \" -X <method> \"https:// <my_developer_hub_url> / <endpoint> \" \\", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer <token> \" -X POST \"https:// <my_developer_hub_url> / <endpoint> \" -d <body>", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer <token> \" -X POST \"https:// <my_developer_hub_url> /api/permission/roles\" -d '{ \"memberReferences\": [\"group:default/example\"], \"name\": \"role:default/test\", \"metadata\": { \"description\": \"This is a test role\" } }'", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer <token> \" -X PUT \"https:// <my_developer_hub_url> /api/permission/roles/role/default/test\" -d '{ \"oldRole\": { \"memberReferences\": [ \"group:default/example\" ], \"name\": \"role:default/test\" }, \"newRole\": { \"memberReferences\": [ \"group:default/example\", \"user:default/test\" ], \"name\": \"role:default/test\" } }'", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDtoken\" -X POST \"https:// <my_developer_hub_url> /api/permission/policies\" -d '[{ \"entityReference\":\"role:default/test\", \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\":\"allow\" }]'", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDtoken\" -X PUT \"https:// <my_developer_hub_url> /api/permission/policies/role/default/test\" -d '{ \"oldPolicy\": [ { \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\": \"allow\" } ], \"newPolicy\": [ { \"permission\": \"policy-entity\", \"policy\": \"read\", \"effect\": \"allow\" } ] }'", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDtoken\" -X POST \"https:// <my_developer_hub_url> /api/permission/roles/conditions\" -d '{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": {\"claims\": [\"group:default/janus-authors\"]} } }'", "curl -v -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDtoken\" -X PUT \"https:// <my_developer_hub_url> /api/permission/roles/conditions/1\" -d '{ \"result\":\"CONDITIONAL\", \"roleEntityRef\":\"role:default/test\", \"pluginId\":\"catalog\", \"resourceType\":\"catalog-entity\", \"permissionMapping\": [\"read\", \"update\", \"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": {\"claims\": [\"group:default/janus-authors\"]} } }'", "[ { \"memberReferences\": [\"user:default/username\"], \"name\": \"role:default/guests\" }, { \"memberReferences\": [ \"group:default/groupname\", \"user:default/username\" ], \"name\": \"role:default/rbac_admin\" } ]", "[ { \"memberReferences\": [ \"group:default/groupname\", \"user:default/username\" ], \"name\": \"role:default/rbac_admin\" } ]", "{ \"memberReferences\": [\"group:default/test\"], \"name\": \"role:default/test_admin\" }", "201 Created", "{ \"oldRole\": { \"memberReferences\": [\"group:default/test\"], \"name\": \"role:default/test_admin\" }, \"newRole\": { \"memberReferences\": [\"group:default/test\", \"user:default/test2\"], \"name\": 
\"role:default/test_admin\" } }", "200 OK", "204", "204", "[ { \"entityReference\": \"role:default/test\", \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\": \"allow\", \"metadata\": { \"source\": \"csv-file\" } }, { \"entityReference\": \"role:default/test\", \"permission\": \"catalog.entity.create\", \"policy\": \"use\", \"effect\": \"allow\", \"metadata\": { \"source\": \"csv-file\" } }, ]", "[ { \"entityReference\": \"role:default/test\", \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\": \"allow\", \"metadata\": { \"source\": \"csv-file\" } }, { \"entityReference\": \"role:default/test\", \"permission\": \"catalog.entity.create\", \"policy\": \"use\", \"effect\": \"allow\", \"metadata\": { \"source\": \"csv-file\" } } ]", "[ { \"entityReference\": \"role:default/test\", \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\": \"allow\" } ]", "201 Created", "{ \"oldPolicy\": [ { \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\": \"allow\" }, { \"permission\": \"catalog.entity.create\", \"policy\": \"create\", \"effect\": \"allow\" } ], \"newPolicy\": [ { \"permission\": \"catalog-entity\", \"policy\": \"read\", \"effect\": \"deny\" }, { \"permission\": \"policy-entity\", \"policy\": \"read\", \"effect\": \"allow\" } ] }", "200", "204 No Content", "204 No Content", "[ { \"pluginId\": \"catalog\", \"policies\": [ { \"isResourced\": true, \"permission\": \"catalog-entity\", \"policy\": \"read\" }, { \"isResourced\": false, \"permission\": \"catalog.entity.create\", \"policy\": \"create\" }, { \"isResourced\": true, \"permission\": \"catalog-entity\", \"policy\": \"delete\" }, { \"isResourced\": true, \"permission\": \"catalog-entity\", \"policy\": \"update\" }, { \"isResourced\": false, \"permission\": \"catalog.location.read\", \"policy\": \"read\" }, { \"isResourced\": false, \"permission\": \"catalog.location.create\", \"policy\": \"create\" }, { \"isResourced\": false, \"permission\": \"catalog.location.delete\", \"policy\": \"delete\" } ] }, ]", "[ { \"pluginId\": \"catalog\", \"rules\": [ { \"name\": \"HAS_ANNOTATION\", \"description\": \"Allow entities with the specified annotation\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"annotation\": { \"type\": \"string\", \"description\": \"Name of the annotation to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the annotation to match on\" } }, \"required\": [ \"annotation\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_LABEL\", \"description\": \"Allow entities with the specified label\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"label\": { \"type\": \"string\", \"description\": \"Name of the label to match on\" } }, \"required\": [ \"label\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_METADATA\", \"description\": \"Allow entities with the specified metadata subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities metadata to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": 
\"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_SPEC\", \"description\": \"Allow entities with the specified spec subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities spec to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_KIND\", \"description\": \"Allow entities matching a specified kind\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"kinds\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of kinds to match at least one of\" } }, \"required\": [ \"kinds\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_OWNER\", \"description\": \"Allow entities owned by a specified claim\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"claims\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of claims to match at least one on within ownedBy\" } }, \"required\": [ \"claims\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } } ] } ... <another plugin condition parameter schemas> ]", "{ \"id\": 1, \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } }", "[ { \"id\": 1, \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } } ]", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } } }", "{ \"id\": 1 }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } }", "200", "204", "curl -X GET \"http://localhost:7007/api/licensed-users-info/users/quantity\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDtoken\"", "{ \"quantity\": \"2\" }", "curl -X GET \"http://localhost:7007/api/licensed-users-info/users\" -H 
\"Content-Type: application/json\" -H \"Authorization: Bearer USDtoken\"", "[ { \"userEntityRef\": \"user:default/dev\", \"lastTimeLogin\": \"Thu, 22 Aug 2024 16:27:41 GMT\", \"displayName\": \"John Leavy\", \"email\": \"[email protected]\" } ]", "curl -X GET \"http://localhost:7007/api/licensed-users-info/users\" -H \"Content-Type: text/csv\" -H \"Authorization: Bearer USDtoken\"", "userEntityRef,displayName,email,lastTimeLogin user:default/dev,John Leavy,[email protected],\"Thu, 22 Aug 2024 16:27:41 GMT\"", "p, <role_entity_reference> , <permission> , <action> , <allow_or_deny>", "g, <group_or_user> , <role_entity_reference>", "p, role:default/guests, catalog-entity, read, allow p, role:default/guests, catalog.entity.create, create, allow g, user:default/my-user, role:default/guests g, group:default/my-group, role:default/guests", "result: CONDITIONAL roleEntityRef: <role_entity_reference> pluginId: <plugin_id> permissionMapping: - read - update - delete conditions: <conditions>", "oc create configmap rbac-policies --from-file=rbac-policies.csv --from-file=rbac-conditional-policies.yaml", "apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage spec: application: extraFiles: mountPath: /opt/app-root/src configMaps: - name: rbac-policies", "permission: enabled: true rbac: conditionalPoliciesFile: /opt/app-root/src/rbac-conditional-policies.yaml policies-csv-file: /opt/app-root/src/rbac-policies.csv policyFileReload: true", "p, <role_entity_reference> , <permission> , <action> , <allow_or_deny>", "g, <group_or_user> , <role_entity_reference>", "p, role:default/guests, catalog-entity, read, allow p, role:default/guests, catalog.entity.create, create, allow g, user:default/my-user, role:default/guests g, group:default/my-group, role:default/guests", "result: CONDITIONAL roleEntityRef: <role_entity_reference> pluginId: <plugin_id> permissionMapping: - read - update - delete conditions: <conditions>", "oc create configmap rbac-policies --from-file=rbac-policies.csv --from-file=rbac-conditional-policies.yaml", "permission: enabled: true rbac: conditionalPoliciesFile: /opt/app-root/src/rbac-conditional-policies.yaml policies-csv-file: /opt/app-root/src/rbac-policies.csv policyFileReload: true", "permission enabled: true rbac: admin: users: - name: user:default/guest pluginsWithPermission: - catalog - permission - scaffolder", "auth: environment: development providers: guest: userEntityRef: user:default/guest dangerouslyAllowOutsideDevelopment: true", "p, role:default/myrole, catalog.entity.read, read, allow g, user:default/myuser, role:default/myrole p, role:default/another-role, catalog-entity, read, allow g, user:default/another-user, role:default/another-role", "p, role:default/myrole, catalog.entity.create, create, allow g, user:default/myuser, role:default/myrole", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"USDcurrentUser\"] } } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"USDownerRefs\"] } } }", "[ { \"pluginId\": \"catalog\", \"rules\": [ { \"name\": \"HAS_ANNOTATION\", \"description\": 
\"Allow entities with the specified annotation\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"annotation\": { \"type\": \"string\", \"description\": \"Name of the annotation to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the annotation to match on\" } }, \"required\": [ \"annotation\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_LABEL\", \"description\": \"Allow entities with the specified label\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"label\": { \"type\": \"string\", \"description\": \"Name of the label to match on\" } }, \"required\": [ \"label\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_METADATA\", \"description\": \"Allow entities with the specified metadata subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities metadata to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_SPEC\", \"description\": \"Allow entities with the specified spec subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities spec to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_KIND\", \"description\": \"Allow entities matching a specified kind\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"kinds\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of kinds to match at least one of\" } }, \"required\": [ \"kinds\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_OWNER\", \"description\": \"Allow entities owned by a specified claim\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"claims\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of claims to match at least one on within ownedBy\" } }, \"required\": [ \"claims\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } } ] } ... 
<another plugin condition parameter schemas> ]", "{ \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } } }", "{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] }", "{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ], \"not\": { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Api\"] } } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"update\", \"delete\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ANNOTATION\", \"resourceType\": \"catalog-entity\", \"params\": { \"annotation\": \"keycloak.org/realm\", \"value\": \"<YOUR_REALM>\" } } } }", "{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"scaffolder\", \"resourceType\": \"scaffolder-action\", \"permissionMapping\": [\"use\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ACTION_ID\", \"resourceType\": \"scaffolder-action\", \"params\": { \"actionId\": \"quay:create-repository\" } } } }" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html-single/authorization/index